WO2018108867A1 - Manipulateur mobile et procédé de commande d'un manipulateur mobile - Google Patents
- Publication number
- WO2018108867A1 (PCT/EP2017/082333)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- camera
- manipulator
- mobile
- time
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/02—Sensing devices
- B25J19/021—Optical sensing devices
- B25J19/023—Optical sensing devices including video camera means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
Definitions
- The present invention relates to a mobile manipulator, in particular a manipulator which is mounted on a mobile, i.e. movable, platform, as well as a method for controlling such a mobile manipulator.
- A manipulator is a freely programmable, program-controlled handling device.
- A mobile manipulator, i.e. a manipulator that can change not only its orientation but also its location, can be designed as a manipulator on a mobile platform.
- a commercial manipulator can be placed on a mobile platform to change its overall location.
- The movement of the mobile platform may be effected via wheels, rails, air cushions or similar locomotion techniques.
- Mobile manipulators are used, for example, as driverless transport systems (German: FTS) in warehouses for transporting objects and goods. The manipulator usually moves independently through the warehouse and grips the objects to be transported.
- Such areas can be life-threatening areas, such as a burning house or the surroundings during a bomb disposal, or areas that are very difficult or impossible for humans to reach, such as the surfaces of distant planets or the deep sea.
- The environment is usually unknown in such areas, so that the movement of the mobile manipulator cannot be predetermined, i.e. preprogrammed. In these cases, the mobile manipulator must independently determine its position and direction so that it can perform the tasks assigned to it.
- Such methods are known as SLAM (Simultaneous Localization and Mapping) methods.
- According to the invention, a mobile manipulator is provided which has: a mobile platform; a manipulator arranged on the mobile platform; at least one camera which is attached to the mobile manipulator and provides a first image of an area of an environment at a first time; and a controller configured to check whether the first image has sufficient features of the environment to perform an active SLAM method based on the first image. The controller adjusts the viewing range of the camera if the first image does not have sufficient features, so that the camera provides, at a second time subsequent to the first time, a second image having sufficient features to perform an active SLAM method. The controller is further adapted to perform an active SLAM method using the first or the second image.
- Since the at least one camera is preferably pivotally mounted on the mobile manipulator, it can be aligned independently of the direction of movement of the mobile manipulator. Thus, areas of the environment that are relevant to image processing can be viewed or searched for without the entire mobile manipulator having to be moved or rotated. This increases the flexibility of the mobile manipulator.
- the controller of the mobile manipulator is arranged to autonomously control the at least one camera in terms of its field of view in order to always ensure that a suitable image is available to the controller for carrying out the SLAM method.
- First, it is checked whether the viewing area of the at least one camera has to be adjusted in order to obtain an image suitable for carrying out the SLAM method. If the captured first image contains sufficient features for a SLAM method, that image is used. If the first image does not contain enough features, so that the SLAM method would not provide reliable results, the viewing range of the camera is adjusted.
- the SLAM process is then performed on the second image that contains sufficient features for the SLAM process.
- The number of extracted features or their relevance can serve as a decision criterion. Fixed or dynamic thresholds can be set for the decision. Relevant features may be corners, edges, lines, circles, textures, colors, or other features used for feature recognition in image processing.
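The decision criterion above can be sketched as a simple threshold check. This is a minimal sketch, not the patent's concrete algorithm: the `Feature` type, its relevance score, and the threshold values are hypothetical and would be tuned to the SLAM backend in use.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    x: float          # image coordinates of the feature
    y: float
    relevance: float  # e.g. a corner-response or ORB score (assumed scale 0..1)

def has_sufficient_features(features, min_count=50, min_relevance=0.2):
    """Decide whether an image's features suffice for the SLAM method.

    Counts only features whose relevance exceeds a fixed threshold;
    a dynamic threshold could be substituted, as the text suggests.
    """
    relevant = [f for f in features if f.relevance >= min_relevance]
    return len(relevant) >= min_count
```

The same check could weigh relevance instead of counting, but a count against a threshold is the simplest reading of the criterion described above.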
- The active SLAM method allows the mobile manipulator to determine its position and to create a 3D map of the environment. This is particularly advantageous if positioning by means of a satellite-based positioning system such as GPS, GLONASS or Galileo is not possible. In buildings, between tall houses or in mountain canyons, in the deep sea or on distant planets, the time signals of the positioning satellites often cannot be received, or only insufficiently, so that satellite-based positioning is not possible.
- the SLAM method performs positioning based on image data of the environment, which is also executable in areas without the above-mentioned timing signals. Position determination is thus possible by the SLAM method.
- creating a map is very helpful for another future "tour" of the area, and the elevation information contained in the created 3D map also provides additional useful information and a realistic view of the area.
- The viewing area of the camera for capturing the second image is aligned with the area of the environment shown by a zeroth image, which was provided at a zeroth time prior to the first time.
- the zeroth image is known to have sufficient features for a SLAM process.
- a "look back" of the camera to this area of the environment ensures that sufficient features for a SLAM method are again present in the following second recording and the SLAM method thus provides reliable results.
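The "look back" strategy can be sketched as a selection of the next viewing direction. The direction representation and function names here are hypothetical stand-ins; a real controller would translate the chosen direction into pan/tilt commands.

```python
def choose_view_direction(first_image_ok, current_direction, last_good_direction):
    """Return the viewing direction for the next capture.

    If the first image had sufficient features, keep the current viewing
    direction; otherwise "look back" toward the area shown by the zeroth
    image, which is known to contain sufficient features.
    """
    if first_image_ok:
        return current_direction
    return last_good_direction
```

In a running system, `last_good_direction` would be updated whenever an image passes the feature-sufficiency check, so there is always a known-good area to fall back on.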
- the controller may execute at least one main activity of the mobile platform and / or the manipulator as well as the active SLAM method simultaneously.
- The simultaneous execution of the above activities allows a main activity that requires a position determination to be carried out without interruptions, as the position is determined simultaneously using the SLAM method.
- the mobile manipulator can perform its tasks continuously and effectively.
- the main activity comprises moving the mobile platform, the manipulator, or both.
- a mobile manipulator such as underwater or in space
- Simultaneous execution here does not necessarily mean mathematically exact simultaneity, but allows a so-called soft real time, i.e. an execution within certain minimum periods of time that a viewer nevertheless perceives as simultaneous.
- A control priority of the main activity is higher than that of the SLAM method, so that the main activity continues to be performed during the adjustment of the viewing range of the first camera and/or the execution of the active SLAM method. The higher prioritization of the main activity ensures that this activity is given precedence, as it is usually more important than the execution of the SLAM method.
- the main activity is given priority.
- at least one first camera is pivotally attached to the manipulator and at least one second camera is pivotally attached to the mobile platform.
- With two individual cameras, two different viewing areas can be recorded and analyzed simultaneously. This increases the flexibility of the system and allows quick positioning and mapping, since the likelihood is increased that at least one camera provides an image with sufficient features for the SLAM method.
- The cameras can be moved and adjusted by motor both in a plane and multi-dimensionally, i.e. panned, tilted or zoomed.
- One camera can look for new images with distinctive features, e.g. in the direction of movement of the mobile manipulator, while the other camera remains aligned with a feature-rich area of the environment to provide the SLAM method with meaningful images even while the mobile manipulator moves. If necessary, the controller can switch between the cameras.
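The switch between the cameras can be sketched as a simple selection rule. The interfaces here are hypothetical: `images` stands in for the latest frame of each camera, `features_of` for the feature-extraction step, and `sufficient` for the decision criterion discussed earlier.

```python
def select_slam_image(images, features_of, sufficient):
    """Pick, among the images of several cameras, the first whose features
    suffice for the SLAM method; return None if no camera qualifies.

    `images` maps a camera name to its latest image, `features_of`
    extracts the features of an image, and `sufficient` decides whether
    those features are enough for reliable positioning.
    """
    for name, image in images.items():
        if sufficient(features_of(image)):
            return name, image
    return None
```

Returning `None` signals the case where the controller must adjust a viewing range (or start a free search) before the SLAM method can continue.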
- The viewing range of the first camera can be adjusted by using the movement possibilities of the manipulator, in addition to the movement space of the camera itself, to bring the camera into a specific position. If, for example, the arm of the manipulator is moved upwards, the significantly raised camera position relative to the mobile platform allows a wider viewing area for this camera compared to a camera mounted lower, for example on the mobile platform.
- The first and the second camera take pictures in the visible spectrum as a 2D, ToF (Time of Flight) or stereo camera. Cameras in the visible spectral range have the advantage that they are very inexpensive, small and readily available or exchangeable.
- A review of the recorded image by a human observer is thus easily possible. During training or a subsequent analysis, a viewer could meaningfully intervene in the control without complex image processing, i.e. visualization techniques, being necessary.
- The second camera provides, at a third time, a third image that has sufficient features for the SLAM method; the controller then adjusts the viewing range of the first camera such that, at a fourth time subsequent to the third time, a fourth image is provided by the first camera showing the area of the environment shown by the third image.
- By using two cameras, one camera can be set up to capture images of new areas of the environment with sufficient features, preferably in the direction of movement of the mobile manipulator, whereupon the controller aligns the camera that was previously focused on an area of the environment with sufficient features onto the new area. It is thus achieved that images suitable for the SLAM method are captured from regions of the environment that are as near as possible and clearly visible.
- The mobile manipulator may thus "shimmy" from one image with sufficient features to the next, preferably in its direction of movement, switching between the cameras as needed. Uninterrupted, reliable positioning and mapping throughout the locomotion of the mobile manipulator can be achieved this way.
- A method for controlling a mobile manipulator is also provided, wherein the mobile manipulator comprises a mobile platform, a manipulator mounted on this mobile platform, and at least one camera, and wherein the method comprises the steps of: a. capturing, by the camera, a first image of a region of an environment at a first time; b. extracting features from the first image; c. determining whether the features extracted from the first image are sufficient to perform an active SLAM method based on the first image; d. adjusting the field of view of the camera by controlling the mobile manipulator to provide, at a second time subsequent to the first time, a second image having sufficient features of the environment to perform the active SLAM method based on the second image, if the features extracted from the first image are insufficient.
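The steps a–d above can be sketched as one control cycle. This is a sketch under assumptions: the `camera` object, the `adjust_view` callback, and the feature-extraction and sufficiency functions are hypothetical stand-ins for the hardware and SLAM interfaces.

```python
def slam_control_step(camera, adjust_view, extract_features, sufficient):
    """One cycle of the claimed method.

    a. capture a first image; b. extract features; c. check sufficiency;
    d. if insufficient, adjust the camera's viewing range and capture a
    second image at a later time. Returns the image to feed to SLAM.
    """
    first_image = camera.capture()            # step a
    features = extract_features(first_image)  # step b
    if sufficient(features):                  # step c
        return first_image
    adjust_view(camera)                       # step d
    return camera.capture()                   # second image, at time t2
```

In a full system this cycle would run concurrently with the main activity, with the main activity's control priority kept higher as described above.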
- The adjustment of the viewing area can be effected by motorized control, i.e. by the controller driving the at least one camera.
- The at least one camera can be rotated, panned or tilted to change its viewing area.
- the viewing area can also be changed by mechanical or digital zooming.
- The viewing area can be adjusted by, for example, "looking back" to areas of the environment known to have sufficient features, or by orienting the camera's viewing area in a direction in which most of the features of the current image are located. It is also possible to initiate a free search of the camera over areas of the environment that show sufficient features; the movement of the camera in the free search can be predetermined according to a pattern.
- The method further comprises the following step: adjusting the viewing area of the camera for capturing the second image to the area of the environment shown by a zeroth image, which was provided at a zeroth time prior to the first time.
- The second and the zeroth image will not be exactly coincident, but will largely show the same area of the environment. However, since most of the relevant features should be located in a wide area around the center of the image, the two images are approximately equal in the number and relevance of the extracted features.
- the method further comprises the step of performing an active SLAM method to determine the position of the mobile manipulator and to create a 3D map of the environment.
- Positioning by the SLAM method by means of optical sensors has the advantage that satellite-based positioning can be dispensed with.
- The method further comprises simultaneously executing a main activity of the mobile platform and/or the manipulator and the active SLAM method.
- the main activity comprises at least moving the mobile platform, the manipulator, or both.
- The mobile manipulator comprises at least a first camera and at least one second camera, and the method further comprises the following steps: providing, by the second camera, a third image at a third time, the third image having sufficient features to perform an active SLAM method, and adjusting the viewing range of the first camera such that, at a fourth time subsequent to the third time, a fourth image is provided by the first camera showing the area of the environment shown by the third image.
- Fig. 1 is a schematic representation of an embodiment of a mobile manipulator;
- Figure 2 is a schematic image taken by a camera on the mobile manipulator and showing an object with many features from the environment of the mobile manipulator;
- Figs. 3a-3c are each a schematic image as in Fig. 2, recorded at different times and in different viewing directions.
- FIG. 1 shows an embodiment of a mobile manipulator 1, which has a mobile platform 10 and a manipulator 20 arranged thereon.
- The manipulator 20 is freely pivotable about its axes in all three spatial directions, taking into account predetermined limit parameters.
- An end effector and / or a camera 30 can be attached to the end effector receptacle of the manipulator 20.
- the end effector may be, for example, a gripping tool for gripping objects.
- a second camera 31 may be attached to the mobile platform.
- Both the first camera 30 and the second camera 31 can be adjustable, in particular pivotable, in two or three dimensions and can be designed, for example, as a pannable, tiltable and zoom-capable camera.
- An adjustment of the viewing range of the camera 30, 31 can be done by moving, rotating, panning and / or tilting the camera or by zooming.
- the movement of the camera 30, 31 is calculated by the controller 12 at runtime of the control process of the main activity of the mobile manipulator.
- the SLAM method may be performed based on images of the first 30 or the second camera 31 or both.
- the wheel-driven mobile manipulator 1 in this embodiment can move freely, for example in a direction V.
- The movement forms a main activity of the mobile manipulator 1, especially during exploration, and is carried out with priority and, if the computing resources permit, preferably simultaneously with the SLAM method.
- FIG. 2 shows a schematic image taken by a camera 30, 31 and showing a region of the environment around the mobile manipulator 1.
- The object 100, which is clearly visible in the image and fills large areas of it, has many relevant features 40.
- Relevant features 40 may be edges, lines, circles, ORB features, or other features used for feature extraction in image processing. For visualization purposes, extracted features and feature regions 40 are marked in white by way of example.
- a SLAM method is preferably a DPPTAM (Dense Piecewise Planar Tracking and Mapping) or an ORB-SLAM method.
- Figs. 3a to 3c show images 60, 61, 62 of a camera 30, 31, for example the first camera 30, which were recorded at different times t0, t1, t2 in different viewing directions.
- Fig. 3a represents the zeroth image 60 taken at a zeroth time t0.
- the object 100 and the extracted features 40 are clearly and completely recognizable.
- The extracted features 40 are meaningful for the SLAM method for positioning and mapping.
- the position of the mobile manipulator 1 can be determined and a mapping of the surroundings can take place.
- the mobile manipulator 1 moves by way of example in a direction V.
- the viewing direction of the camera 30 is initially maintained unchanged.
- the first image 61 shown in FIG. 3b is recorded.
- the object 100 is now only partly visible at the edge of the first image 61. Therefore, only a few relevant features 40 can be extracted from the first image 61.
- the majority of the first image 61 now forms a region 50 without relevant features 40.
- Such an area 50 can be, for example, a monochrome, textureless wall or surface.
- the extracted features 40 are now no longer sufficient for a SLAM method. Reliable positioning is therefore not possible on the basis of the first image 61 from FIG. 3b.
- The controller 12 may change the viewing area of the camera 30 or, for example, switch to another, second camera 31 in order to review an image taken by this second camera 31 and, if it has sufficient features, to use that image for the SLAM method.
- The algorithms that the controller 12 uses to check the images of the cameras 30, 31 can estimate, based on certain parameters, whether an image is suitable for the SLAM method.
- the parameters may include, for example, the number of features 40 detected in an image, the spatial distribution of features 40, the change of features 40 relative to a temporally earlier image, the distance of features 40 from each other, etc.
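The parameter-based estimate can be sketched as a combined check of feature count and spatial distribution. This is a minimal sketch under assumptions: the spread measure (standard deviation of feature coordinates) and the thresholds are illustrative choices, not the patent's concrete algorithm.

```python
import statistics

def image_suitability(points, min_count=50, min_spread=20.0):
    """Estimate whether an image suits the SLAM method from the number of
    detected features and their spatial distribution.

    `points` is a list of (x, y) feature positions. The spread is the sum
    of the standard deviations of the x and y coordinates, so features
    clustered in one corner score low even when they are numerous.
    """
    if len(points) < min_count:
        return False
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    spread = statistics.pstdev(xs) + statistics.pstdev(ys)
    return spread >= min_spread
```

Further parameters from the list above, such as the change of features relative to an earlier image, could be added as additional terms of the same check.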
- the viewing range of the corresponding camera 30, 31 can then also be directed to the area of the environment that the zeroth image 60 already showed.
- The second image 62 recorded in this way is shown in Fig. 3c. As can be seen, the zeroth image 60 and the second image 62 will coincide at least roughly, though not exactly. Nevertheless, a large number of the relevant extracted features 40 match in both images 60, 62, or are at least present.
- the second image 62 can then be supplied to the SLAM method and the position of the mobile manipulator 1 can be reliably determined.
Abstract
The present invention relates to a mobile manipulator (1), comprising a mobile platform (10), a manipulator (20) which is arranged on the mobile platform (10), at least one camera (30, 31) which is attached to the mobile manipulator (1) and which provides, at a first time (t1), a first image (61) of an area of an environment, and a controller (12) which serves to check whether the first image (61) has sufficient features (40) of the environment to carry out an active SLAM method on the basis of the first image (61), the controller (12) adjusting the viewing range of the camera (30, 31), if the first image (61) does not have sufficient features (40), in order to provide by means of the camera (30, 31), at a second time (t2) subsequent to the first time (t1), a second image (62) which has sufficient features (40) to carry out an active SLAM method, and the controller (12) serving to carry out an active SLAM method by means of the first (61) or the second image (62).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP17825420.7A EP3555722A1 (fr) | 2016-12-16 | 2017-12-12 | Manipulateur mobile et procédé de commande d'un manipulateur mobile |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102016225310.7A DE102016225310A1 (de) | 2016-12-16 | 2016-12-16 | Mobiler Manipulator und Verfahren zum Steuern eines mobilen Manipulators |
DE102016225310.7 | 2016-12-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018108867A1 true WO2018108867A1 (fr) | 2018-06-21 |
Family
ID=60935787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2017/082333 WO2018108867A1 (fr) | 2016-12-16 | 2017-12-12 | Manipulateur mobile et procédé de commande d'un manipulateur mobile |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3555722A1 (fr) |
DE (1) | DE102016225310A1 (fr) |
WO (1) | WO2018108867A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020077481A1 (fr) * | 2018-10-15 | 2020-04-23 | Lingdong Technology (Beijing) Co. Ltd | Système de véhicule autonome avec caméra orientable et indicateur |
WO2020099899A2 (fr) | 2018-11-14 | 2020-05-22 | Óbudai Egyetem | Robot de transport pour chaînes de fabrication |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120095619A1 (en) * | 2010-05-11 | 2012-04-19 | Irobot Corporation | Remote Vehicle Missions and Systems for Supporting Remote Vehicle Missions |
US20130226344A1 (en) * | 2012-02-29 | 2013-08-29 | Irobot Corporation | Mobile Robot |
US20160144505A1 (en) * | 2014-11-26 | 2016-05-26 | Irobot Corporation | Systems and Methods for Performing Occlusion Detection |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120070291A (ko) * | 2010-12-21 | 2012-06-29 | 삼성전자주식회사 | 보행 로봇 및 그의 동시 위치 인식 및 지도 작성 방법 |
DE102013211414A1 (de) * | 2013-06-18 | 2014-12-18 | Kuka Laboratories Gmbh | Fahrerloses Transportfahrzeug und Verfahren zum Betreiben eines fahrerlosen Transportfahrzeugs |
US9427874B1 (en) * | 2014-08-25 | 2016-08-30 | Google Inc. | Methods and systems for providing landmarks to facilitate robot localization and visual odometry |
- 2016-12-16 DE DE102016225310.7A patent/DE102016225310A1/de active Pending
- 2017-12-12 WO PCT/EP2017/082333 patent/WO2018108867A1/fr unknown
- 2017-12-12 EP EP17825420.7A patent/EP3555722A1/fr not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
DE102016225310A1 (de) | 2018-06-21 |
EP3555722A1 (fr) | 2019-10-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17825420 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2017825420 Country of ref document: EP Effective date: 20190716 |