EP3710781A1 - Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel - Google Patents

Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel

Info

Publication number
EP3710781A1
Authority
EP
European Patent Office
Prior art keywords
vehicle
rail system
virtual rail
signatures
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP18773967.7A
Other languages
German (de)
English (en)
Inventor
Stephan Simon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/04 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by terrestrial means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D34/00 Mowers; Mowing apparatus of harvesters
    • A01D34/006 Control or measuring arrangements
    • A01D34/008 Control or measuring arrangements for automated or remotely controlled operation

Definitions

  • the present invention relates to a method for automatically guiding a vehicle along a virtual rail system.
  • the invention furthermore relates to a computer program that performs each step of the method when it runs on a computing device, and to a machine-readable storage medium on which the computer program is stored.
  • the invention relates to a vehicle which is arranged to be guided automatically along the virtual rail system.
  • in known methods, detected features are compared with the features of a database and assigned to a position. A distance metric is often taken into account when assigning features to positions.
  • a method used in this context is SLAM (Simultaneous Localization and Mapping).
  • structures used for localization are observed from different viewing angles and at different distances and are associated with descriptors.
  • examples of such structures are posts, trees or parts thereof, building parts, walls, corners, etc.
  • these structures are usually in the far field of the vehicle. Often these structures are not stable over time or cannot be detected under certain lighting conditions.
  • sometimes no suitable structures are present in the environment, for example because the environment is poor in structure and the far field cannot be observed, for example due to a low
  • automatically moving vehicles should often move along predeterminable paths. For example, automatically driving
  • a method for automatically guiding a vehicle along a virtual rail system is proposed.
  • the term "vehicle" here includes, in addition to motor vehicles and commercial vehicles, transport equipment, self-propelled mobile robots, trucks, and aircraft moving near the ground, such as drones or landed airplanes.
  • the features are obtained from a sensor signal, wherein the sensor signal, for example, a
  • the sensor signal may be formed from signals of a one-dimensional image capture unit when the vehicle is moving in the second direction.
  • the feature is an intermediate stage extracted from the signal, which can be used to characterize the position on the ground. For this purpose, a local section of the signal assigned to the position is considered. For example, a convolution or filtering of this section with one or more wavelets results in an N-dimensional vector for the feature.
  • the features are formed at each position in the same manner.
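As a sketch of this feature formation, the following Python example filters a local section of a one-dimensional sensor signal with a small bank of kernels standing in for the wavelets of the text; the kernel shapes and signal values are illustrative assumptions, not taken from the patent:

```python
def extract_feature(signal, center, half_width, kernels):
    """Form an N-dimensional feature vector for one ground position by
    filtering the local section of the signal around `center` with each
    kernel (each response is one step of a convolution/filtering)."""
    patch = signal[center - half_width : center + half_width + 1]
    # One response value per kernel: inner product of section and kernel.
    return [sum(p * k for p, k in zip(patch, kernel)) for kernel in kernels]

# Hypothetical 1-D sensor signal (e.g. one scan line of ground texture).
signal = [0.2, 0.8, 0.5, 0.9, 0.1, 0.4, 0.7]
kernels = [
    [-1.0, 0.0, 1.0],  # edge-like response
    [1.0, 1.0, 1.0],   # local brightness
]
feature = extract_feature(signal, center=3, half_width=1, kernels=kernels)
# `feature` is a 2-dimensional vector characterizing this position.
```

With N kernels, the same procedure applied at every position yields features "formed at each position in the same manner", as the text requires.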
  • the subsurface may be any type of artificial or natural subsoil having distinct characteristics that remain at least partially unchanged over a period of time.
  • the method can be applied to a random pattern ground. Typical random patterns provide sufficient variation in their surface texture, brightness or
  • Suitable soil types include:
  • while solid floors are relevant for industrial robots and transport robots, the latter type of surface, namely lawns, meadows or green areas, is of great importance for robotic lawnmowers. It should be noted that plants growing on the ground, e.g. grass, are also recorded when the characteristics of the subsurface are captured.
  • if the characteristics are recorded over all layers of the subsurface together with the plants, the recorded characteristics of the subsurface change considerably through the growth of the plants within a relevant period of time, for example between two mowing cycles of the robotic lawnmower. It may be provided to detect the characteristics of the ground only for predetermined sections of the ground, in particular only for certain layers of the ground.
  • in connection with lawns, meadows and green areas, the turf, hence the top soil layer on which the plants grow, may advantageously be used to record the characteristics.
  • soil structures, small stones and / or dead plants can serve as characteristics.
  • the characteristics of the ground are converted into at least one working signature.
  • Signatures are codes of the features or the
  • the signatures can be electronically stored and processed.
  • the conversion of the features into signatures typically results in a loss of information due to the coding.
  • the part of the information that is unnecessary for the characterization of the position is discarded.
  • the signature may be formed as a concatenation of binary coded numbers representing the vector for the feature in quantized form.
  • the conversion of the features into the signatures may be weighted, wherein the weighting may be performed by a user or, for example, by a neural network in the sense of a training.
  • a plurality of work signatures can be formed from one feature or a signature can be formed from a plurality of features or multiple signatures can be formed from a plurality of features.
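A minimal sketch of one such coding, assuming the feature is an N-dimensional vector whose components are quantized to a few bits before the binary codes are concatenated; the value range and bit widths are illustrative assumptions, not taken from the patent:

```python
def feature_to_signature(feature, bits_per_component=4, lo=-2.0, hi=2.0):
    """Quantize each vector component to `bits_per_component` bits and
    concatenate the binary codes into one integer signature. The
    quantization deliberately discards the part of the information
    that is unnecessary for characterizing the position."""
    levels = (1 << bits_per_component) - 1
    signature = 0
    for x in feature:
        x = min(max(x, lo), hi)                   # clamp to value range
        q = round((x - lo) / (hi - lo) * levels)  # map onto 0..levels
        signature = (signature << bits_per_component) | q
    return signature

# Four components x 4 bits each give a 16-bit signature, within the
# 8- to 32-bit range mentioned later in the text.
sig = feature_to_signature([-2.0, 0.0, 2.0, 1.0])
```

Because the signature is an integer, it can directly serve as an address into the correspondence table described below.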
  • the virtual rail system is an area between at least one
  • the vehicle can move in both directions on the given virtual rails.
  • the virtual rails depict the transport routes for the transport robot.
  • the virtual rails trace the tracks along which the robotic lawnmower mows.
  • these tracks run serpentine and take into account the working width of the mower.
  • the virtual rail system is stored as a record of the positions of the ground on which the vehicle is to move. These positions on the virtual rail system are assigned reference signatures.
  • the reference signatures thus form a map of the virtual rail system.
  • a reference signature can be associated with exactly one position on the virtual rail system, which can be determined unambiguously from the one reference signature.
  • a reference signature may have multiple
  • the position can then not be determined from just one reference signature, but a multiplicity of reference signatures are necessary, which are assigned to the same position or adjacent positions.
  • the assignment of the reference signatures and the positions is stored in a correspondence table which is designed as a lookup table.
  • the positions on the rail system are assigned addresses within the correspondence table.
  • the signature is regarded as a number indicating an address in the correspondence table, and therefore serves to determine the address in the correspondence table. Storing the signature itself can therefore be dispensed with. Where storing or deleting reference signatures in the correspondence table is spoken of in the following, this simplified formulation is to be understood as meaning that a position belonging to the signature is stored or deleted.
  • the signature does not need to be unique, so multiple locations on the virtual track system may have the same signature and therefore reference the same address in the correspondence table. Therefore, it is advantageously provided that a plurality of positions per reference signature can be stored in the correspondence table. It should be noted here that the area covered by the virtual rail system, hence its length and width, has an influence on the probability of such a multiple occurrence of the signature in the correspondence table.
  • the correspondence table is, as explained above, advantageously designed so that multiple positions per table field can be stored. In this case, a storage capacity can be permanently assigned per table field or the total available storage can be flexibly divided among the table fields, eg. B. by means of dynamic lists.
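A minimal Python sketch of such a correspondence table, using dynamic lists so that several positions can be stored per signature address; all names and values are illustrative:

```python
from collections import defaultdict

class CorrespondenceTable:
    """Lookup table addressed by the signature value. Because signatures
    need not be unique, each address holds a dynamic list of positions."""

    def __init__(self):
        self._slots = defaultdict(list)  # signature -> list of positions

    def store(self, signature, position):
        """'Storing a reference signature' means storing the position
        under the address given by the signature."""
        if position not in self._slots[signature]:
            self._slots[signature].append(position)

    def lookup(self, signature):
        return self._slots.get(signature, [])

# The same signature may legitimately occur at two different positions.
table = CorrespondenceTable()
table.store(0x3A, (0.0, 0.0))
table.store(0x3A, (4.0, 1.0))
table.store(0x7F, (2.0, 0.5))
```

A fixed-capacity array per table field would be the alternative mentioned in the text; dynamic lists correspond to flexibly dividing the total available storage among the table fields.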
  • the virtual rail system may preferably be divided into several sections. Preferably, each section of the virtual rail system can then be assigned its own correspondence table, in which case the different correspondence tables are advantageously compatible with each other by having identical address ranges and requiring the same type of signature formation.
  • An electronic control unit of the vehicle can advantageously be referred to the part of the correspondence table associated with the current section of the virtual rail system on which the vehicle is moving, and to parts of the vehicle
  • the reference signatures can be obtained in the same way as explained for the working signatures.
  • the reference signatures form a map of the virtual rail system.
  • one or more of the following methods can be chosen:
  • the vehicle can, before it is automatically guided, complete a "training run".
  • the vehicle is controlled or guided by a user or by another vehicle that has already been trained.
  • the vehicle moves along the paths that later become the virtual
  • reference signatures for the virtual rail system can then preferably be stored in the above-mentioned correspondence table, in that the assigned positions are stored in the correspondence table. This allows the vehicle to be automatically guided in any new environment following the training.
  • the vehicle is a robotic lawnmower
  • the training run is performed along the boundaries of the area to be mowed.
  • the training run leads along the edge of the lawn. Areas that are not to be mowed, e.g. flower beds or paths, can be omitted during the training run or, as shown later, specially marked. This training run makes it possible to dispense with a perimeter wire, which conventionally serves to guide the robotic lawnmower.
  • the correspondence table, and thus also the reference signatures, are transmitted by at least one transmitter.
  • the vehicle can be automatically guided immediately in a new environment.
  • the transmission is wireless or wired and may take place directly between the transmitter and the vehicle's electronic control unit or be routed via a server, in other words from one
  • the correspondence table can be transmitted completely or only partially, with the parts of the correspondence table correlating with the achievable positions on the rail system.
  • the transmitter may for example be integrated in another vehicle. This is useful when the vehicle follows the other vehicle, for example in a convoy of motor vehicles or trucks on the road or as a transport column for several mobile transport devices.
  • the transmitter can be permanently stationed. In this case, in particular a plurality of radio beacons may be provided, each transmitting the part of the correspondence table that is assigned to the portion of the virtual rail system on which the vehicle can move within the transmission radius of the transmitter.
  • the ground can also be captured in advance, independently of the vehicle, with a sensor system designed for this purpose, e.g. a ground scanner.
  • the sensor system is advantageously set up to efficiently capture larger sections of the virtual rail system. Reference signatures are then determined from the captured data and stored on a central server. Finally, the reference signatures are preferably in the form of
  • safety distances can be maintained hereby.
  • collisions with people, objects, the infrastructure and / or each other can be avoided.
  • this planning can ensure compliance with traffic regulations.
  • the at least one working signature is compared with reference signatures of the virtual rail system. This is particularly relevant for the case where the reference signature is assigned exactly one position. If the at least one working signature and the at least one reference signature of the virtual rail system agree, the position of the vehicle on the virtual rail system is inferred. If a single working signature is compared with a single reference signature, the search is not for the best possible match, as is often done in the context of similarity or distance measures, but for a perfect match of the two signatures, i.e. identity. This has the advantage that checking for identity is feasible with considerably less computational effort than checking for similarity or dissimilarity with the aid of similarity or distance measures.
  • the working signature is likewise regarded as a number indicating the address of the
  • the signature preferably has a length between 8 bits and 32 bits, as a compromise between a too-short signature, with which only a few positions can be distinguished, and a too-long signature, which leads to a large correspondence table, the large
  • signatures that are too short may be combined by considering groups in a fixed geometrical arrangement, e.g. two signatures of equal length at two offset positions.
  • the signatures do not have to be chosen so long that all positions on the virtual rail system or the current section of the virtual rail system are uniquely assigned, since, as already mentioned, the multiple occurrence of signatures is allowed.
  • matches between the working signatures and the reference signatures are counted, and the position of the vehicle on the virtual rail system is determined from the number of matches.
  • each match with a reference signature is a vote for the position or positions assigned to that reference signature.
  • to determine the highest number of matches between the working signatures and the reference signatures, at least one histogram of the matches is determined. Each match is assigned to a corresponding histogram bin, the histogram bins in turn being assigned to positions on the rail system. Then the histogram bin with the most matches is searched for; alternatively, several adjacent histogram bins are determined which together have the most matches.
  • a histogram can be created for each correspondence table or for each part of the correspondence table as described above, and finally the histogram bin can be determined which has the most matches across all histograms.
  • multiple histograms of differing spatial resolution may be used for a correspondence table or for a part of the correspondence table; when a maximum is found in a low-resolution histogram, a histogram with a higher resolution is used for the corresponding area or length in the next step.
  • the histograms can be one-dimensional and/or two-dimensional. For example, in a two-step approach, a first histogram is one-dimensional and has a resolution of 1 m per histogram bin, and a second histogram is two-dimensional and has one
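The voting scheme above can be sketched as follows; positions here are one-dimensional arc lengths along the rail, and the bin size and map contents are invented for illustration:

```python
from collections import Counter

def localize_by_voting(work_signatures, correspondence, bin_size=1.0):
    """Each working signature votes for every position stored under the
    identical reference signature; votes are pooled into histogram bins
    of `bin_size` metres, and the fullest bin yields the position."""
    votes = Counter()
    for sig in work_signatures:
        # Exact match (identity), no similarity or distance measure.
        for pos in correspondence.get(sig, []):
            votes[int(pos // bin_size)] += 1
    if not votes:
        return None
    best_bin, _ = votes.most_common(1)[0]
    return best_bin * bin_size  # lower edge of the winning bin

# Hypothetical map: signature -> positions in metres along the rail.
correspondence = {
    0x11: [2.3, 7.9],   # ambiguous signature: two stored positions
    0x22: [2.6],
    0x33: [2.1],
}
estimate = localize_by_voting([0x11, 0x22, 0x33, 0x44], correspondence)
```

Note how the ambiguous signature still contributes correctly: its spurious vote at 7.9 m is outvoted by the bin around 2 m. A coarse-to-fine variant would rerun the vote with a smaller `bin_size` restricted to the winning bin.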
  • if position tracking is carried out, the following steps can be performed: starting from the known starting position, the respectively next position is determined, also called "tracking". The reference signatures used for the comparison when determining the next position can be restricted to those reference signatures which lie within a search field, which is defined by a search area around the current position or by a search area around the expected next position. By restricting the reference signatures to those within the search field, the required computing and/or storage capacity can be reduced, because the complete virtual rail system does not have to be considered.
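Tracking with a restricted search field might look like this minimal sketch; the search radius, bin size and map contents are assumptions for illustration:

```python
from collections import Counter

def track_next_position(prev_position, work_signatures, correspondence,
                        search_radius=1.5, bin_size=0.5):
    """Vote as in global localization, but only over reference positions
    inside the search field around the previous position, so the full
    virtual rail system is never scanned."""
    votes = Counter()
    for sig in work_signatures:
        for pos in correspondence.get(sig, []):
            if abs(pos - prev_position) <= search_radius:  # search field
                votes[round(pos / bin_size)] += 1
    if not votes:
        return prev_position  # no match: keep the last known position
    return votes.most_common(1)[0][0] * bin_size

# Positions in metres; 40.0 and 90.0 lie outside the search field.
correspondence = {0xA1: [5.0, 40.0], 0xB2: [5.2], 0xC3: [4.5, 90.0]}
nxt = track_next_position(5.0, [0xA1, 0xB2, 0xC3], correspondence)
```

The search-field test is what delivers the saving promised in the text: distant positions stored under the same signature (here 40.0 m and 90.0 m) are never considered.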
  • the reference signatures can be updated, at least in part, with the aid of the work signatures.
  • This update preferably takes place already when a majority of the other reference signatures point to a position. Particularly preferably, the update takes place permanently during operation.
  • preferably, the reference signatures are not replaced during the update but stored multiple times. As a result, different short-term conditions, such as a dry state and a wet state of the ground, can be taken into account.
  • additional information on each reference signature may be provided which is suitable for detecting obsolete reference signatures.
  • a counter per position stored in the correspondence table can be provided as additional information, which is increased if the work signature matches the reference signature at the determined position and the determined position coincides with the stored position with sufficient accuracy.
  • those positions assigned to the reference signatures whose counter is below a threshold can be deleted.
  • the counters can be reset.
  • multiple reference signatures pointing to the same position can be reduced or summarized.
  • the counters can be stored in the correspondence table. Each position stored in the correspondence table can be assigned a counter.
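The counter mechanism can be sketched as follows; the table layout (a dict mapping each signature to position/counter entries) and the tolerance are illustrative assumptions:

```python
def confirm_match(table, signature, measured_position, tol=0.3):
    """Increment the counter of a stored position when the working
    signature matches the reference signature and the measured position
    agrees with the stored position within the tolerance `tol`."""
    for entry in table.get(signature, []):
        if abs(entry["pos"] - measured_position) <= tol:
            entry["count"] += 1

def prune_stale(table, threshold):
    """Delete stored positions whose counter stayed below `threshold`
    (obsolete reference signatures), then reset the remaining counters."""
    for sig in list(table):
        table[sig] = [e for e in table[sig] if e["count"] >= threshold]
        for e in table[sig]:
            e["count"] = 0  # reset for the next observation period
        if not table[sig]:
            del table[sig]

table = {0x2C: [{"pos": 3.0, "count": 0}, {"pos": 12.0, "count": 0}]}
confirm_match(table, 0x2C, 3.1)   # a position near 3.0 was re-observed
prune_stale(table, threshold=1)   # the entry at 12.0 was never confirmed
```

Running `prune_stale` periodically, e.g. once per mowing cycle, implements the deletion and counter reset described above without ever storing the signature itself.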
  • control signals for the vehicle are provided with which the movement of the vehicle is controlled.
  • the drive and / or the steering of the vehicle can be controlled with the control signals. This makes it possible to move the vehicle so that it is guided on the virtual rail system.
  • control signals for a transverse direction of the virtual rail system can be provided, which control a steering of the vehicle, and control signals for a longitudinal direction of the virtual rail system
  • control signals for the vehicle are provided with which at least one implement of the vehicle is controlled.
  • the implement of the vehicle can be controlled in predetermined positions in the desired manner and perform or suspend the intended work.
  • an example of a working device is the mower of a robotic lawnmower.
  • with the control signals, e.g. the speed or the power of the mower can be controlled.
  • Optional additional features can be stored in the correspondence table. These additional features include:
  • Information about storing the reference signatures or the positions assigned to them such as: Date / time, dry or wet condition, (day) light or darkness, other environmental conditions.
  • the lighting device can be controlled by means of the direction sensor so that the illumination or the illumination color illuminates the detected area from a predetermined direction, regardless of the orientation the vehicle currently has with respect to its surroundings. As a result, the ground is always captured in the same way. Because of the defined lighting direction, the image content of the illuminated area is also largely independent of the orientation of the vehicle, since the image can be aligned by means of the direction sensor.
  • the computer program is set up to perform each step of the method, in particular when it is executed on a computing device or control unit. It allows the implementation of the method in a conventional electronic control unit without structural changes. For this purpose, it is stored on the machine-readable storage medium.
  • by loading the computer program onto a conventional electronic control unit, the electronic control unit is obtained which is adapted to automatically guide the vehicle along the virtual rail system.
  • an FPGA (Field Programmable Gate Array)
  • an ASIC (Application Specific Integrated Circuit)
  • a vehicle is proposed which has a detection device for detecting features of the ground and which is set up to be guided along the virtual rail system with the method explained above.
  • the vehicle may have the electronic control unit described above.
  • the vehicle can be an industrial robot that moves primarily within industrial plants and carries out activities autonomously there.
  • the vehicle may also be a transport robot that autonomously transports goods in predeterminable ways that correspond to the virtual rail system.
  • the vehicle may be a robotic lawnmower that autonomously mows lawns, meadows or green spaces.
  • the invention is not limited to the examples mentioned.
  • the detection device has an optical image capture device, in particular a camera or a camera system, which takes a picture of the ground.
  • the features can then be captured directly from the captured image.
  • the optical image sensing device can be used for any type of ground in which the features are optically distinguishable. Examples of this are listed above.
  • the detection area is directed vertically downwards on the ground.
  • the vehicle has a lighting device that is associated with the image capture device and illuminates the area of the ground that is detected by the image capture device.
  • the lighting device with multiple light sources
  • the substrate is then illuminated with different colors from different directions, so that the features can be better recognized.
  • advantageously, the lighting device is operated in pulsed mode to avoid motion blur in the image.
  • the pulse duration of the illumination device and the acquisition time of the image acquisition device can be synchronized.
  • the optical image acquisition device offers the advantage that the focus allows certain layers of the background to be selected for viewing.
  • the image capture device uses a camera or a camera system with a low depth of field, the sharply imaged layer being located approximately at the level at which the features are to be detected. Disturbing objects outside the depth of field are blurred and are therefore not included as features. Such disturbing objects are
  • the detection device comprises at least one tactile sensor which, at each position, detects the height and the nature, in particular the compliance, of the solid ground, of deposits on the
  • the tactile sensor can be designed, for example, in the form of a measuring finger. A pin is moved up and down several times per second and strikes the solid ground with little force. Depending on the height and the nature of the ground, the stroke changes, and possibly other quantities, e.g. the damping,
  • the tactile sensor can be designed, for example, in the form of a spring-loaded measuring wheel. The measuring wheel rolls with little force over the solid ground. Depending on the height and condition of the ground, the measuring wheel is pressed against the spring.
  • a plurality of tactile sensors may be arranged in a row perpendicular to the direction of movement of the vehicle across the width of the track.
  • the tactile sensor is suitable for uneven surfaces in which the height and/or the condition change depending on the position on the order of magnitude of the area relevant for the formation of the signatures (template or sub-template). Turf, meadows and green areas as well as fields are examples of this. Since the tactile sensor detects the solid ground, the height of the turf is measured and plants growing on it are not perceived.
  • the detection device has at least one air pulse sensor or air accumulation sensor, also called a ground effect sensor. From an opening, air (or another gas) is ejected toward the ground, preferably vertically downward. This can be done both in pulses and continuously. The air then escapes depending on the height and the condition of the ground, on deposits on the ground and the like. Depending on this, a counterforce arises, which is measured by the air pulse sensor or air accumulation sensor by means of a drag sensor, and from which the height and nature of the ground, the deposits on the ground and the like are determined.
  • a plurality of air pulse sensors or air accumulation sensors may be arranged in a row perpendicular to the direction of movement across the width of the track.
  • the air pulse sensor or air accumulation sensor is suitable for uneven surfaces in which the height and/or the condition change depending on the position on the order of magnitude of the area relevant for the formation of the signatures (template or sub-template). Turf, meadows and green areas as well as fields are examples of this. In lawns, meadows and green areas, the opening from which air is expelled toward the ground is preferably arranged at a height such that the air flows directly onto the ground.
  • the detection device may alternatively or additionally comprise further sensors.
  • the sensors described below also measure the height of the subsoil and create a depth image of the subsoil.
  • sound sensors in particular ultrasonic sensors
  • electromagnetic sensors may be provided, such as an ultra-wideband sensor or a radar sensor. The latter can have a penetration depth in the centimeter range into the ground.
  • an imaging method, e.g. based on a sensor array, can be used.
  • the sensors mentioned are particularly suitable for uneven ground, especially for lawns, meadows and green areas. The sound or electromagnetic waves penetrate the plants and a depth image of the sod can be recorded.
  • the vehicle may include a direction sensor by which the reference signatures and the working signatures can be aligned. This leads to advantages in self-localization with respect to the consistency of the working signatures and the reference signatures. Furthermore, where virtual rails intersect, the rail that is suitable for traveling in the desired direction can be selected.
  • the direction sensor can be used to control the lighting device such that the lighting or the illumination color illuminates the illuminated surface from a predetermined direction, regardless of which orientation the vehicle currently has with respect to its surroundings.
  • the virtual rail system may, in addition to at least one virtual rail, have various virtual components, such as branches, switches, crossings, T-junctions, parking positions, evasive positions, etc. In this way, the vehicle is guided on the virtual rail system similarly to a real rail system, e.g. for trains.
  • the virtual rail system is in itself not visible or distinguishable from the environment, at least for humans. It can be provided that visible markings are applied to the real locations of the virtual rail system. The visible markings make it clear to people that there is a virtual rail system in this area and accordingly self-driving vehicles can be expected. In the case of self-driving motor vehicles, this may signal that the driver can switch to automated driving. There are many ways to realize such visible marks. Examples include the application of color dots, a sprinkling of color chips, stripes with a different pattern or other color, especially in carpets, among many more
  • Figure 1 shows a cross-sectional view of a vehicle according to an embodiment of the invention.
  • Figure 2 shows an oblique view of a vehicle according to another embodiment of the invention.
  • FIG. 3 shows a view from below of the vehicle according to FIG. 1 on a virtual rail.
  • Figure 4 shows a bottom view of a vehicle according to another embodiment of the invention, which is designed as a robotic lawnmower.
  • FIG. 5 shows a schematic representation of a tactile sensor in the form of a measuring finger.
  • FIG. 6 shows a schematic representation of a tactile sensor in the form of a measuring wheel.
  • FIG. 7 shows a schematic representation of an arrangement of a plurality of tactile sensors from FIG. 5.
  • FIG. 8 shows a schematic representation of an air storage sensor.
  • FIG. 9 shows a schematic representation of an arrangement of several air storage sensors from FIG. 8.
  • Figure 10 shows a schematic cross-sectional view of an optical
  • FIG. 11 shows a schematic representation of recorded sensor signal measurement points, feature formation and work signatures according to an embodiment of the invention.
  • FIG. 12 shows a schematic representation of the virtual rail system, a plurality of detection areas and a common feature formation according to an embodiment of the invention.
  • FIG. 13 shows a correspondence table according to an embodiment of the invention.
  • Figure 14 shows a schematic representation of working signatures and reference signatures during localization according to an embodiment of the invention.
  • FIGS. 1 to 3 show different views of vehicles 1 according to two embodiments of the invention.
  • The vehicle 1 can be used, for example, as an industrial robot or as a transport robot.
  • The vehicle 1 moves on a ground 2 along a virtual rail system 3 (not shown in FIG. 1).
  • The virtual rail system 3 specifies, for example, the path along which the vehicle 1 moves.
  • The vehicles 1 each have an image capture device 4, e.g. a camera, and a lighting device 5, which are connected to an electronic control unit 20.
  • The image capture device 4 may include one or more of the following sensors:
  • a monocular image sensor;
  • a one-dimensional line sensor, which is arranged transversely to the direction of movement of the vehicle 1 or detects transversely to it, the second dimension being obtained by the movement of the vehicle 1;
  • a conventional image sensor that provides a two-dimensional sensor signal, either as a grayscale or color image; this is ideally operated with a short exposure time and a small f-number (i.e. a large aperture), so that at the same time a low luminous efficacy of the illumination is sufficient;
  • a distance-measuring sensor, e.g. based on ultrasound, radar or time-of-flight measurement, or a stereo camera;
  • an orientation-measuring sensor.
  • The image capture device 4 has a detection area 6 for features (not shown in detail) of the ground 2.
  • Different possibilities exist for the arrangement of the image capture device 4 and the sensors.
  • The image capture device 4 is arranged centrally on the non-steered axle.
  • The image capture device 4 is arranged near the center of the vehicle 1.
  • FIG. 1 shows an embodiment of the invention in which the image capture device 4 is arranged below the vehicle 1.
  • The image capture device 4 is offset toward the interior of the vehicle 1, on the one hand to achieve a larger detection area 6 and on the other hand to protect the image capture device 4 from dirt, abrasion and the like. Furthermore, a changing position of the sun and/or rain then have no direct influence on the image acquisition.
  • The illumination device 5 is arranged in a ring around the image capture device 4 and illuminates at least the detection area 6.
  • The vehicle 1 comprises a direction sensor 9, also connected to the electronic control device 20, with which the orientation of the vehicle 1 with respect to the ground 2 can be determined.
  • The illumination direction of the lighting device 5 is shown in the figure.
  • FIG. 2 shows a further embodiment of the invention, in which the image capture device 4 is arranged on the front side of the vehicle 1 in the direction of travel.
  • The image capture device 4 can additionally be used for collision avoidance, the detection area 7 for collision avoidance being designed to be larger than the detection area 6 for the features of the ground 2.
  • The illumination device 5 is also arranged on the front of the vehicle 1 and illuminates at least the detection area 6.
  • FIG. 3 shows a view from below of the vehicle from FIG. 1, in which the image capture device 4 is arranged centrally below the vehicle 1, the vehicle 1 moving on the virtual rail system 3.
  • The lighting device 5 comprises a plurality of lighting modules 8, which emit light from different directions with different colors (here four different colors). This makes it possible to distinguish features even on weakly textured but structured grounds.
  • In a further embodiment, the illumination device 5 is rotatable, either mechanically rotatable or electronically rotatable by using multicolor light modules 8. Using the direction sensor 9 to control the rotation of the lighting device 5 ensures that the illumination direction for each color is independent of the current orientation of the vehicle 1 with respect to its environment, or with respect to the orientation of the virtual rail system 3 at the position of the vehicle 1.
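This heading compensation can be sketched as follows. The sketch is illustrative only: it assumes a ring of 12 multicolor modules and four colors, and the helper `module_color_assignment` is hypothetical, not part of the patent.

```python
def module_color_assignment(vehicle_heading_deg, n_modules=12,
                            colors=("red", "green", "blue", "white")):
    """Assign a color to each ring module so that every color always
    illuminates from the same world-frame direction, regardless of the
    vehicle's current heading (as reported by the direction sensor 9).
    Module 0 is taken to point along the vehicle's forward axis."""
    assignment = {}
    for m in range(n_modules):
        # Direction of module m in the world frame, in degrees
        world_deg = (vehicle_heading_deg + m * 360.0 / n_modules) % 360.0
        # Divide the world frame into one fixed sector per color
        sector = int(world_deg // (360.0 / len(colors)))
        assignment[m] = colors[sector]
    return assignment
```

After a 90° turn of the vehicle, the module that now faces a given world direction receives the color previously emitted by the module that faced it before, so the illumination pattern stays fixed relative to the environment.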
  • The lighting device 5 can be operated in pulsed mode in order to avoid motion blur in the image; the pulsing is synchronized with the image capture device 4.
  • FIG. 3 also shows that the virtual rail system 3 has a switch 11 in addition to at least one virtual rail 10. Further components are likewise not shown here in detail.
  • The virtual rail system 3 can be made visible to humans by marking the ground 2 by means of different colors and/or shapes.
  • FIG. 4 shows a bottom view of a vehicle 1 according to another embodiment of the invention, in which the vehicle 1 is designed as a robotic lawnmower.
  • The vehicle 1 has a working device in the form of a mower 21, which is arranged on the underside of the vehicle 1.
  • The vehicle 1 of FIG. 4 additionally has a sensor device 22, which differs from the image capture device 4 described in FIGS. 1 to 3. However, as shown in FIG. 10 and described in connection therewith, an optical image capture device 4 as in FIGS. 1 to 3 can also be used for the robotic lawnmower.
  • FIG. 5 shows a first embodiment of a tactile sensor 23, which is designed in the form of a measuring finger.
  • A pin 24 is arranged in a housing 25 and protrudes from it.
  • A drive device, not shown, is provided which moves the pin 24 up and down several times per second. In doing so, one end of the pin 24 strikes the solid ground 2 with little force.
  • FIG. 6 shows a second embodiment of a tactile sensor 26 in the form of a measuring wheel.
  • The measuring wheel 27 rolls over the solid ground 2 with little force.
  • The measuring wheel 27 is pressed against a spring 29 via a suspension rod 28.
  • From the deflection of the suspension rod 28 against the spring 29, the height of the ground 2 is inferred, for example by means of a spring-travel sensor.
  • FIG. 7 shows an arrangement of several tactile sensors 23 according to the first embodiment of FIG. 5 for use with turf as ground 2.
  • Grass plants 30 grow on the turf 31, which represents the transition to the solid soil; the roots of the grass plants 30 are located in the turf 31.
  • The growth of these grass plants 30 changes the features within a short time, especially between two mowing cycles. Therefore, the grass plants 30 (and generally all plants) should not be taken into account in the detection of the features.
  • The tactile sensors 23 are arranged in a row perpendicular to the direction of movement of the vehicle 1 in such a way that the pin 24 reaches the solid soil or the turf 31. Only features of the soil, of deposits such as stones, and of quasi-permanent parts of grass plants (generally of all plants), e.g. roots, are therefore detected.
  • The movement of the vehicle 1 expands the measurement of the one-dimensional sensor device 22 into a two-dimensional measurement.
  • The arrangement may also be transferred to tactile sensors 26 according to the second embodiment.
  • A plurality of measuring wheels 27 are then arranged in a row perpendicular to the direction of movement of the vehicle 1 and roll in the direction of movement over the turf 31.
  • FIG. 8 shows an embodiment of an air retention sensor 32, which can also be used in the sensor device 22.
  • Air is ejected vertically downward in the direction of the ground 2 (see FIG. 9). This can be done either in pulses or continuously. The air then escapes depending on the height and nature of the ground 2 and of deposits on it, such as stones. Depending on this, a counterforce arises, which is measured by means of a drag sensor 34 and from which the features of the ground 2 are determined.
  • FIG. 9 shows an arrangement of several air retention sensors 32 according to the embodiment of FIG. 8 for use on lawn as ground 2.
  • The air retention sensors 32 are arranged in a row perpendicular to the direction of movement of the vehicle 1.
  • The opening 33, from which the air is ejected in the direction of the ground 2, is arranged at a height within the plane of the plants 30, so that the air strikes the turf 31 directly.
  • In further embodiments, the sensor device 22 can have further sensors.
  • Sound sensors, in particular ultrasonic sensors, or electromagnetic sensors may be provided, such as an ultra-wideband sensor or a radar sensor. These can also be arranged in a row perpendicular to the direction of movement of the vehicle 1 or in the form of an array. The sound or electromagnetic waves penetrate the grass plants 30, and a depth image of the turf 31 can be recorded.
  • The same considerations apply to the optical image capture device 4 according to the first embodiment of the vehicle 1 of FIGS. 1 and 3 when it is used as a robotic lawnmower on lawns, meadows or green spaces. Since the growth of the grass plants 30 changes the features within a short time, especially between two mowing cycles, the grass plants 30 (generally all plants) should not be taken into account in the detection of the features.
  • Both in the case of the optical image capture device 4 whose detection area 6 points vertically downward onto the ground 2 (cf. FIG. 1) and in the case of the optical image capture device 4 arranged on the front side of the vehicle 1 (see FIG. 2), a camera with a low depth of field can be used. The optical image capture device 4 then focuses on a narrow region 33 around the turf 31. As a result, the grass plants 30 are partially not perceived.
  • For illumination, the lighting device 5 already described can be used.
  • FIG. 11 shows how features are obtained from a two-dimensional sensor signal of the image capture device 4 and signatures are formed.
  • The signal measurement points 12 represent the measured values detected by the image capture device 4, i.e. e.g. gray values, distance values or height values, or measurement vectors.
  • The signal measurement points 12 form a strip shown in FIG. 11, which represents a section of the virtual rail system 3.
  • The vehicle 1, not shown here, moves in the direction of arrow 13 from bottom to top. Although the steps explained below are performed at each position of the vehicle 1, they are shown here separately from one another. To follow the sequence of steps, FIG. 11 should be read from bottom to top.
  • The signal measurement points 12 are usually not recorded at the same time, but chronologically with the movement of the vehicle 1.
  • The horizontal distance and the vertical distance between the signal measurement points 12 are shown as approximately equal in this embodiment, but they can also differ in other embodiments.
  • The width of a track 14 detected by the image capture device 4 may be limited by the width of the vehicle 1, the width of the image capture device 4 or the extent of the feature detection area 6.
  • The track 14 in this embodiment comprises 25 signal measurement points 12.
  • In practice, the number of signal measurement points 12 is significantly higher.
  • For an image sensor, a signal measurement point 12 corresponds to a pixel.
  • A template 15 is shown, within which features are obtained from the signal measurement points 12.
  • The template 15 extends over a two-dimensional surface comprising a plurality of signal measurement points 12.
  • This template 15 does not have to cover the entire width of the track 14. In this embodiment it covers almost half of the track 14; in other embodiments it could also cover more or less.
  • The template 15 here has an octagonal shape, but other shapes are possible, such as a circle, ellipse, oval, square, rectangle, polygon or line.
  • Within the template 15 there are sub-templates 16, shown here as 37 circles. Again, other shapes and/or other numbers are possible. Here, the sub-templates 16 are arranged in the template 15 without overlapping and largely fill it. In other embodiments, the sub-templates 16 may also overlap.
  • A sub-template measured value or vector is formed from the values and/or vectors of those signal measurement points 12 which are at least partially covered by the respective sub-template 16 or are located in its surroundings.
  • For this purpose, a weighted average over the values and/or vectors of the four closest signal measurement points 12 is formed, the weights being selected e.g. as a function of the distance between the signal measurement points 12 and the center of the sub-template 16, with the sum of the weights being 1.
  • This step can also be understood as an interpolation step.
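This interpolation step can be sketched as follows. Inverse-distance weights are an assumption for illustration (the text only requires distance-dependent weights whose sum is 1), and the function name is hypothetical.

```python
import math

def sub_template_value(center, points):
    """Form a sub-template value at `center` = (x, y) as a weighted
    average over the four signal measurement points closest to it.
    `points` is a list of ((x, y), value) pairs; a value of None marks
    an invalid measurement and receives weight zero."""
    nearest = sorted(points, key=lambda p: math.dist(center, p[0]))[:4]
    weights, values = [], []
    for pos, val in nearest:
        if val is None:                    # invalid pixel -> weight zero
            continue
        d = math.dist(center, pos)
        weights.append(1.0 / (d + 1e-9))   # inverse-distance weighting
        values.append(val)
    if not weights:
        return None                        # whole sub-template invalid
    total = sum(weights)                   # normalize so weights sum to 1
    return sum((w / total) * v for w, v in zip(weights, values))
```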
  • An invalid value/vector is also provided for the sub-template 16, which is used in the case of a defective image capture unit 4, in particular in the case of defective pixels or a non-plausible sensor signal, or where a measured value/vector is missing because the template 15 protrudes beyond the track 14.
  • In this case, the weight is set to zero so that the invalid value is not taken into account.
  • Optionally, preprocessing of the sub-template measured values/vectors is carried out in this exemplary embodiment, e.g. a normalization or a geometric "rectification" of the data (for example with a homography), so that they then correspond to a fronto-parallel projective image of the ground 2.
  • In this way, any misalignment of the vehicle with respect to the ground 2 is compensated.
  • If the distance of the image capture unit 4 from the ground 2 is variable and a non-telecentric optic is used, it may be advantageous to scale the image of signal measurement points 12 or the template 15 to compensate for the difference in distance and thus ensure that the signatures formed do not depend on the distance.
  • Any type of distance sensor can be used to determine the distance.
  • A feature is formed from the preprocessed sub-template measurement values/vectors that is intended to be characteristic of the position.
  • This feature may be, for example, a vector of numbers, where each number represents a sub-feature, e.g. the result of a convolution or filtering of a signal excerpt with a wavelet. Particularly suitable is, for example, a convolution of the image section of gray values with a wavelet that performs a smoothed second derivative in a predetermined direction (depending on the direction sensor 9, or independently thereof with fixed reference to the vehicle coordinate system).
  • The formation of the features is (apart from the preprocessing) spatially invariant, i.e. it can be done in the same way at any position.
  • For example, the feature may be a 16-bit-wide signature E, L, J, G or D, where the 16 binary values can be calculated individually, e.g. by 16 different weighted combinations of the 37 sub-template values/vectors, each followed by a threshold decision. There are thus 2¹⁶ possibilities to express different positions in the signatures.
  • The determination of the weighted combinations and the associated threshold values is performed, for example, by a user or by a neural network on the basis of training data, and can also be adapted automatically during operation in order to adapt to different grounds 2.
  • Each signature E, L, J, G or D is assigned to a position, e.g. the center of the template 15a-e from which the signature was formed. Since 5 templates 15a-e are evaluated in parallel here, the result is 5 signatures E, L, J, G and D, which are assigned to 5 adjacent positions.
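The formation of such a 16-bit signature can be sketched as follows. The weights and thresholds below are placeholders, seeded at random only to make the sketch runnable; in the described method they would be chosen by a user or learned by a neural network.

```python
import random

def signature_16bit(sub_values, weights, thresholds):
    """Form a 16-bit signature: each bit is one weighted combination of
    the 37 sub-template values followed by a threshold decision, and
    the 16 bits are packed into an integer in [0, 2**16)."""
    sig = 0
    for w_row, t in zip(weights, thresholds):
        response = sum(w * v for w, v in zip(w_row, sub_values))
        sig = (sig << 1) | (1 if response > t else 0)
    return sig

# Placeholder parameters: 16 weighted links over 37 sub-values,
# each with a threshold of zero.
random.seed(0)
W = [[random.gauss(0.0, 1.0) for _ in range(37)] for _ in range(16)]
T = [0.0] * 16
```

Because the ground is essentially random, nearby positions generally yield different bit patterns, which is exactly what makes the signature usable as a table address.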
  • The position can be described with coordinates: a coordinate x transverse to the virtual rail system 3, which has positive and negative signs and whose value zero corresponds to the center of the virtual rail system 3, and a coordinate s, which runs along the virtual rail system 3 and starts at zero at the starting point. It should be noted here that the virtual rail system 3 may be curved.
  • The units of the coordinates x and s can be different and can be metric or depend on the image capture device 4 (e.g. pixels), or in further embodiments on the sensor device 22.
  • This formation of the signatures E, L, J, G and D is carried out anew each time the vehicle 1 has moved a little further. This results in a strip 17 of signatures which, owing to the random ground, are generally different and are designated here only with S.
  • The area 18 includes signatures that are considered together to determine a location. These different signatures are discussed in more detail in connection with FIG. 14.
  • In FIG. 12, one of the templates is located in the overlap region of the two detection areas 6a and 6b.
  • In this case, the signal measurement points 12 from both measurements are used to form the features.
  • The signatures formed from them then have a redundancy that can be useful if the features change depending on the position of the vehicle 1 (even though it is actually assumed that this is not the case).
  • FIG. 13 shows a correspondence table.
  • Reference signatures A-O, which were created as described in connection with FIG. 11 during a learning run of the vehicle 1, are assigned corresponding positions. This assignment between a reference signature A-O and its position is entered in the correspondence table.
  • The same position in the formation of a signature A-O leads to the same location (x, s).
  • Each position (x, s) thus formed corresponds to a position on the virtual rail system.
  • The signature is used only for the one-time determination of the address in the correspondence table.
  • The position is stored at this address.
  • The signature itself is therefore not stored.
  • The signatures can occur multiple times, the probability that a signature occurs several times increasing with the distance stored in the correspondence table and with the width of the strip 17.
  • The correspondence table stores several entries per table field. An amount of memory can be assigned permanently per table field or, in other embodiments, the available memory can be distributed flexibly among the table fields, e.g. by means of dynamic lists.
  • Further information can be stored together with the reference signatures A-O, such as: date/time, dry or wet conditions, (day)light or darkness, and other environmental conditions.
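The correspondence table can thus be sketched as a map addressed by the signature, with a dynamic list of entries per table field; the class and attribute names here are illustrative only:

```python
from collections import defaultdict

class CorrespondenceTable:
    """Signature-addressed table: the signature serves only as the
    address, the stored payload is the position (x, s) plus optional
    extra information (date/time, wet or dry, ...); the signature
    itself is not stored. Dynamic lists hold several entries per
    table field, since a signature can occur multiple times."""
    def __init__(self):
        self._fields = defaultdict(list)

    def store(self, signature, x, s, **info):
        self._fields[signature].append({"x": x, "s": s, **info})

    def lookup(self, signature):
        """All candidate positions recorded under this signature."""
        return self._fields.get(signature, [])
```

During localization, each work signature is looked up in this table and every stored entry becomes one candidate correspondence for the subsequent voting.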
  • FIG. 14 illustrates the procedure for the localization according to an exemplary embodiment of the method according to the invention on the basis of a schematic illustration.
  • The reference signatures A-O have already been determined by the learning run and are shown in FIG. 14.
  • The reference symbols A-O represent a total of 15 different signatures here. In the example, 100 reference signatures have been detected; in practice, significantly more reference signatures are generally detected. Since the number of detected reference signatures exceeds the number of possible distinct signatures, a certain reference signature A-O generally occurs multiple times.
  • For the localization, several work signatures A*-O* are considered together, in particular all 15 work signatures A*-O*.
  • For the sake of clarity, not all connecting lines for these further work signatures A*-O* are drawn in FIG. 14, but only the correct connecting lines.
  • The correspondences thus emphasized are distinguished from all other possibilities in that they mutually confirm each other. If errors are detected in this assignment, it is provided to update the reference signatures A-O by means of the work signatures A*-O*.
  • The position of the vehicle 1 on the virtual rail system 3 is determined.
  • The vehicle 1 is thus currently offset two units to the left of the center of the virtual rail system 3, at the line coordinate 75.
  • The group 19* of 9 work signatures A*-O* marked with a square stands out here.
  • All 9 work signatures A*-O* point to a group 19 of reference signatures A-O with adjacent positions. There are no comparable matches for any of the other positions.
  • Matches can be determined using multi-level histograms. In the following, this will be briefly described:
  • A first histogram is one-dimensional and has a resolution of 1 m per histogram bin, i.e. a coarse resolution.
  • All route positions s_F of the vehicle that result from forming the possible correspondences are entered into it.
  • Each match is counted in the corresponding histogram bin; the histogram bin, or the group of adjacent histogram bins, with the most matches can then be determined.
  • After this first voting, the position along the route coordinate is already known to an accuracy of about 1 m.
  • In a second step, the position is determined precisely.
  • For this purpose, a two-dimensional second histogram with a resolution of 1 cm × 1 cm is used. Only those correspondences participate in the second voting which fell into the winning bins of the first histogram.
  • In the second histogram, the histogram bin or a group of histogram bins with the most matches is again determined.
  • The group of histogram bins may additionally be averaged or weighted.
  • The position corresponding to this histogram bin or these bins corresponds to the position of the vehicle 1.
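The two-stage voting can be sketched as follows, with positions in centimeters, a 1 m coarse bin and a 1 cm fine bin as described; the function name, the tie-breaking behavior, and the example data are illustrative assumptions:

```python
from collections import Counter

def localize(correspondences, coarse=100, fine=1):
    """Two-stage histogram voting over candidate positions (x, s) in cm.
    Stage 1: a 1-D histogram over s with 1 m bins finds the rough route
    position; stage 2: a 2-D histogram with 1 cm x 1 cm bins over the
    surviving correspondences determines the precise position."""
    # Stage 1: coarse vote along the route coordinate s
    h1 = Counter(int(s // coarse) for _, s in correspondences)
    best_coarse, _ = h1.most_common(1)[0]
    # Only correspondences in the winning coarse bin take part in stage 2
    survivors = [(x, s) for x, s in correspondences
                 if int(s // coarse) == best_coarse]
    # Stage 2: fine two-dimensional vote
    h2 = Counter((int(x // fine), int(s // fine)) for x, s in survivors)
    (bx, bs), _ = h2.most_common(1)[0]
    return bx * fine, bs * fine  # (x, s) position of the vehicle
```

With nine mutually confirming correspondences around (x, s) = (-2, 7500) and two outliers, the outliers are eliminated in the first stage and the fine histogram returns (-2, 7500).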
  • Position tracking searches for the next positions on the basis of the position already found. Accordingly, the reference signatures A-O are restricted to a search field resulting from a search range around the detected position or around the predicted next position.
  • For the control in the transverse direction, the already known lateral position x_F is used.
  • The steering is done so that this lateral position x_F is reduced. This is a task known to the person skilled in the art from control engineering.
  • In this embodiment, the vehicle 1 moves in the direction of increasing values of the route position s_F.
  • The knowledge of the direction of travel is used in position tracking to predict the next position and to specify a search field.
  • In principle, the vehicle can also drive in the opposite direction. If the vehicle turns for this, it means in
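The search-field restriction used in position tracking can be sketched as follows; the margin values, the units (cm), and the function names are assumptions for illustration:

```python
def search_field(prev_x, prev_s, speed, dt, x_margin=10, s_margin=50):
    """Predict the next position from the last fix and the known travel
    direction (increasing s), and return the window of positions whose
    reference signatures still need to be compared."""
    predicted_s = prev_s + speed * dt
    return {
        "x_range": (prev_x - x_margin, prev_x + x_margin),
        "s_range": (predicted_s - s_margin, predicted_s + s_margin),
    }

def in_search_field(field, x, s):
    """True if a stored reference position (x, s) lies inside the window."""
    x_lo, x_hi = field["x_range"]
    s_lo, s_hi = field["s_range"]
    return x_lo <= x <= x_hi and s_lo <= s <= s_hi
```

Restricting the comparison to this window keeps the per-step matching effort constant regardless of how long the learned route is.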


Abstract

The invention relates to a method for automatically guiding a vehicle along a virtual rail system, in which features of an infrastructure ground, on which the vehicle moves or will be moved, are detected and converted into at least one work signature (A*-O*); it is checked whether said at least one work signature (A*-O*) matches at least one reference signature (A-O) of the virtual rail system, a position on the virtual rail system being associated with said at least one reference signature (A-O); and, if said at least one work signature (A*-O*) and said at least one reference signature (A-O) match, the position of the vehicle on the virtual rail system can be deduced.
EP18773967.7A 2017-11-14 2018-09-14 Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel Pending EP3710781A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017220291.2A DE102017220291A1 (de) 2017-11-14 2017-11-14 Verfahren zur automatischen Führung eines Fahrzeugs entlang eines virtuellen Schienensystems
PCT/EP2018/074865 WO2019096463A1 (fr) 2017-11-14 2018-09-14 Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel

Publications (1)

Publication Number Publication Date
EP3710781A1 true EP3710781A1 (fr) 2020-09-23

Family

ID=63683154

Family Applications (2)

Application Number Title Priority Date Filing Date
EP18773967.7A Pending EP3710781A1 (fr) 2017-11-14 2018-09-14 Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel
EP18204780.3A Pending EP3482622A1 (fr) 2017-11-14 2018-11-07 Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP18204780.3A Pending EP3482622A1 (fr) 2017-11-14 2018-11-07 Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel

Country Status (4)

Country Link
EP (2) EP3710781A1 (fr)
CN (2) CN111602028A (fr)
DE (1) DE102017220291A1 (fr)
WO (1) WO2019096463A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580049A (zh) * 2019-10-30 2019-12-17 华强方特(深圳)科技有限公司 一种无轨游览车的循迹控制方法
DE102020213151A1 (de) 2020-10-19 2022-04-21 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren und Vorrichtung zum Kartieren einer Einsatzumgebung für zumindest eine mobile Einheit sowie zur Lokalisation zumindest einer mobilen Einheit in einer Einsatzumgebung und Lokalisationssystem für eine Einsatzumgebung
DE102020214002B3 (de) * 2020-11-08 2022-04-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Vorrichtung und Verfahren zur Bestimmung einer Position einer Erfassungseinheit und Verfahren zur Hinterlegung von Extraktionsinformationen in einer Datenbank
DE102022203276A1 (de) 2022-04-01 2023-10-05 Robert Bosch Gesellschaft mit beschränkter Haftung Automated Valet Parking mit schienengestütztem Sensorsystem

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999866A (en) * 1996-11-05 1999-12-07 Carnegie Mellon University Infrastructure independent position determining system
DE102008034606A1 (de) * 2008-07-25 2010-01-28 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur Darstellung der Umgebung eines Fahrzeugs auf einer mobilen Einheit
US20130054129A1 (en) * 2011-08-26 2013-02-28 INRO Technologies Limited Method and apparatus for using unique landmarks to locate industrial vehicles at start-up
DE102011085325A1 (de) * 2011-10-27 2013-05-02 Robert Bosch Gmbh Verfahren zum Führen eines Fahrzeugs und Fahrerassistenzsystem
DE102012207269A1 (de) * 2012-05-02 2013-11-07 Kuka Laboratories Gmbh Fahrerloses Transportfahrzeug und Verfahren zum Betreiben eines fahrerlosen Transportfahrzeugs
DE102013202075A1 (de) * 2013-02-08 2014-08-14 Robert Bosch Gmbh Bewegungsstrategieerarbeitungs- und/oder Navigationsvorrichtung
DE102013003117A1 (de) * 2013-02-25 2013-08-29 Daimler Ag Verfahren zur Selbstlokalisation eines Fahrzeugs und zur Detektion von Objekten in einer Umgebung des Fahrzeugs
DE102013207899A1 (de) * 2013-04-30 2014-10-30 Kuka Laboratories Gmbh Fahrerloses Transportfahrzeug, System mit einem Rechner und einem fahrerlosen Transportfahrzeug, Verfahren zum Planen einer virtuellen Spur und Verfahren zum Betreiben eines fahrerlosen Transportfahrzeugs
US20150149085A1 (en) * 2013-11-27 2015-05-28 Invensense, Inc. Method and system for automatically generating location signatures for positioning using inertial sensors
EP3082537B1 (fr) * 2013-12-19 2020-11-18 Aktiebolaget Electrolux Dispositif de nettoyage robotisé et procédé de reconnaissance de point de repère
DE102015004923A1 (de) * 2015-04-17 2015-12-03 Daimler Ag Verfahren zur Selbstlokalisation eines Fahrzeugs
US10366289B2 (en) * 2016-03-15 2019-07-30 Solfice Research, Inc. Systems and methods for providing vehicle cognition

Also Published As

Publication number Publication date
CN111602028A (zh) 2020-08-28
CN109782752A (zh) 2019-05-21
EP3482622A1 (fr) 2019-05-15
DE102017220291A1 (de) 2019-05-16
WO2019096463A1 (fr) 2019-05-23

Similar Documents

Publication Publication Date Title
EP3710781A1 (fr) Procédé de guidage automatique d'un véhicule le long d'un système de rails virtuel
EP3234715B1 (fr) Procédé de cartographie d'une surface à traiter pour véhicules robots autonomes
EP2752726B1 (fr) Machine de traitement de surface et procédé de traitement associé
WO2013087052A1 (fr) Procédé et dispositif de détermination sans contact des paramètres de plantes et de traitement de ces informations
DE102016101552A1 (de) Verfahren zum Erstellen einer Umgebungskarte für ein selbsttätig verfahrbares Bearbeitungsgerät
EP3416019B1 (fr) Système pourvu d'au moins deux appareils de traitement du sol automatique
EP2659322A1 (fr) Procédé de traitement d'une surface au moyen d'un véhicule robotisé
DE102019206036A1 (de) Verfahren und Vorrichtung zur Bestimmung der geografischen Position und Orientierung eines Fahrzeugs
EP3434090A1 (fr) Système de détermination de données d'environnement tridimensionnelles, en particulier destiné à l'entretien des cultures et module capteur
DE102012112036B4 (de) Selbstfahrendes Bodenbearbeitungsgerät und Verfahren zur Navigation bei einem selbstfahrenden Bodenbearbeitungsgerät
DE102021115630A1 (de) Verfahren zum Betreiben einer mobilen Vorrichtung
DE102015222390A1 (de) Autonomes Arbeitsgerät
DE102011078292A1 (de) Verfahren und Vorrichtung zum Erzeugen einer Befahrbarkeitskarte eines Umgebungsbereiches eines Fahrzeuges
DE102018009114A1 (de) Verfahren zur Bestimmung der Position eines auf einer Verfahrfläche bewegbaren Mobilteils und Anlage mit Mobilteil zur Durchführung des Verfahrens
DE102013202075A1 (de) Bewegungsstrategieerarbeitungs- und/oder Navigationsvorrichtung
DE102020206190A1 (de) System und verfahren für die konfiguration einer arbeitsstätten warnzone.
EP3559773A1 (fr) Procédé de navigation et localisation automatique d'un appareil d'usinage à déplacement autonome
DE102019218192A1 (de) Verfahren zum Bearbeiten von Pflanzen auf einem Feld
DE102015122149A1 (de) Verfahren zum autonomen Betrieb einer Verdichtungsvorrichtung
DE102018113015A1 (de) Autonome, mobile Arbeitmaschine
DE102011115354A1 (de) Navigation anhand von zufälligen Mustern
DE102017109130A1 (de) Einrichtung und Verfahren zur Reinigung eines Offenstalls
DE102014208434A1 (de) Autonomes Fahrzeug und Navigationsverfahren für ein autonomes Fahrzeug zur Kollisionsvermeidung
EP3553618A1 (fr) Dispositif de travail
DE102017210830A1 (de) Erzeugen und Verwenden einer Randbebauungskarte

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200615

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220602