WO2023175580A1 - Working method for controlling an unmanned autonomous vehicle based on a soil type - Google Patents
- Publication number
- WO2023175580A1 (PCT/IB2023/052640)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- autonomous vehicle
- terrain
- unmanned autonomous
- unmanned
- neural network
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
Definitions
- the invention relates to a method for controlling an unmanned autonomous vehicle based on a soil type on a terrain, to an unmanned autonomous vehicle suitable for carrying out the method and to a use of the method and/or the unmanned autonomous vehicle for the autonomous maintenance of a garden.
- WO ‘528 describes a method for controlling a tillage means based on image processing.
- the tillage means comprises a driving means and a tool.
- In a first step of the method, at least one image of the soil is recorded with the aid of a digital acquisition device placed on the tillage means.
- at least one convolution operation is performed on the image by means of a neural network, whereby at least one description of the soil is obtained, after which a control signal for the driving means or the tool is obtained based on the at least one description.
- the neural network is trained using a multitude of images.
- This known method has the disadvantage that it only performs a certain action on the basis of a determination of a soil type. As a result, it is possible that an action is performed at a certain location that is not desired there. For example, the tillage means at one location might be allowed to drive over fallen leaves and perform a certain action there, but at another location the fallen leaves could hide a soil type where this action is unnecessary or could even damage the tillage means.
- Another disadvantage of the method is that the neural network must be trained with the aid of an enormous number of images.
- the aim of the invention is to provide a method which eliminates those disadvantages.
- the present invention relates to a method according to claim 1.
- the unmanned vehicle comprises both at least one global neural network and at least one local neural network.
- Having at least one global neural network is advantageous because it allows the unmanned vehicle to be deployed immediately, without a user having to first create a training set of images of the entire terrain and train a neural network. This would require a lot of time for the user to capture a large number of images of the terrain and to annotate all of these images. This is also expensive for a user because a lot of memory storage is required to save the training set.
- the at least one local neural network is advantageous for obtaining a better classification in those specific cases where the at least one global neural network classifies at least one soil type in an image incorrectly.
- the at least one local neural network is additionally advantageous because only a lower number of images in the local training set is required to achieve a better classification in those specific cases, compared to retraining the at least one global neural network by adding images from the terrain to the global training set.
- the latter solution would require an enormous number of images to have any impact on the training of the at least one global neural network for those specific cases, since the global training set already contains a plethora of images from a large number of different terrains.
- one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network are selected based on the position of the unmanned autonomous vehicle.
- a user does not have to train a local neural network for the entire terrain, but only for those parts where a global neural network produces a misclassification, so that capturing the images for a local training set and training a local neural network requires only limited time and memory. Because several global neural networks and/or several local neural networks can be selected, it is possible to make an optimal selection according to the desired classifications in a part of the terrain. Preferred forms of the device are shown in claims 2 to 15.
- a specific preferred form concerns a method according to claim 2.
- control signals are created on the basis of predetermined rules and there are at least two sets of predetermined rules, a set of predetermined rules being selected depending on the position of the unmanned autonomous vehicle. This is advantageous because it allows the same selection of neural networks to be used in different parts of the terrain, so that the same classifications will be obtained for the same soil types, while a different control signal is still created. This way, it can be avoided that an action is performed on a part of the terrain where it is not desired.
- the present invention relates to an unmanned autonomous vehicle according to claim 16.
- Such an unmanned autonomous vehicle is advantageous because it can be immediately deployed by a user of the unmanned autonomous vehicle on a terrain for performing tasks without additional training of a neural network, while a local neural network can be trained with limited user effort, so that incorrect classifications in a part of the terrain can be easily corrected.
- the present invention relates to a use according to claim 18.
- This use results in an advantageous autonomous maintenance of a garden using an unmanned autonomous vehicle: the unmanned autonomous vehicle can be used immediately by a user for garden maintenance, without the user having to train a neural network of the unmanned autonomous vehicle, and if the unmanned autonomous vehicle performs an unwanted action or fails to perform a desired action due to incorrect classifications in a part of the garden, the user can train a local network with very limited effort to correct the incorrect classifications.
- Figure 1 shows a schematic representation of a terrain, indicating different zones.
- Figure 2 shows a schematic representation of global neural networks and local neural networks contained in an unmanned autonomous vehicle, according to an embodiment of the present invention.
- Figure 3 shows a schematic representation of a selection of neural networks and a set of rules according to a position of an unmanned autonomous vehicle, according to an embodiment of the present invention, in a terrain.
- Quoting numerical intervals by endpoints comprises all integers, fractions and/or real numbers between the endpoints, these endpoints included.
- a neural network refers to an artificial neural network, wherein the neural network includes inputs, nodes, called neurons, and outputs. An input is connected to one or more neurons. An output is also connected to one or more neurons. A neuron can be connected to one or more neurons. A neural network can comprise one or more layers of neurons between an input and an output. Each neuron and each connection of a neural network typically has a weight that is adjusted during a training phase using a training set of sample data.
- the invention relates to a method for controlling an unmanned autonomous vehicle based on a soil type on a terrain.
- the unmanned autonomous vehicle comprises a drive unit for moving the unmanned vehicle over the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain, a processor and memory and a tool.
- the drive unit preferably comprises at least one wheel and a motor for driving the wheel.
- the motor is an electric motor.
- the unmanned autonomous vehicle comprises a battery for powering the motor and other electrical systems.
- the unmanned autonomous vehicle may comprise two, three, four or more wheels, wherein at least one wheel, preferably at least two wheels, are coupled to the motor for driving. It will be apparent to one skilled in the art that the at least one wheel can be part of a caterpillar track, the caterpillar track being drivable by the motor by means of the at least one wheel.
- the unmanned autonomous vehicle comprises a steering device for steering the unmanned autonomous vehicle.
- the steering device is a conventional steering device in which at least one wheel is rotatably arranged.
- the steering device is part of the drive unit, wherein two wheels on opposite sides of the unmanned autonomous vehicle can be driven differently by the motor. Differently means with a different speed and/or opposite direction of rotation.
- the steering device may or may not be part of the drive unit.
- the camera is a digital camera.
- the camera is at least suitable for taking two-di- mensional images.
- the camera is suitable for taking three-dimensional images, with or without depth determination.
- the camera has a known viewing angle.
- the camera has a known position and alignment on the unmanned autonomous vehicle.
- the camera is positioned in such a way that at least part of the soil of the terrain is captured in the image. Because the viewing angle of the camera and the position and alignment of the camera on the unmanned autonomous vehicle are known, a position, relative to the position of the unmanned vehicle, of a soil type of the terrain that is visible on an image captured by the digital camera, is known.
- the camera has a fixed position and alignment on the unmanned autonomous vehicle.
- the camera is rotatably arranged, the camera being 360° rotatable in a horizontal plane and 180° rotatable in a vertical plane.
- the rotatable arrangement of the camera is preferably drivably coupled to motors with encoders.
- Motors with encoders are advantageous for knowing the position and alignment of a rotatably mounted camera.
- the camera is also suitable for capturing images with non-visible light, such as infrared light or ultraviolet light. This is advantageous because it allows images of the terrain to be captured with visible light, infrared light, and ultraviolet light, from which different information can be obtained, which can be advantageously combined for a successful classification of a soil type of the terrain visible in a captured image.
- various cameras can also be combined, for example, wherein a first camera captures images using visible light, a second camera captures images using infrared light, and a third camera captures images using ultraviolet light.
- the first camera, the second camera, and the third camera have an overlapping field of view. This is advantageous for combining information from images captured using the first camera, the second camera, and the third camera.
- the unmanned autonomous vehicle can comprise several similar cameras.
- the positioning means for determining the position of the unmanned vehicle may be any suitable means.
- the positioning means is, for example, a Global Navigation Satellite System (GNSS), such as GPS, GLONASS or Galileo.
- the positioning means is, for example, a system with wireless beacons on the terrain, whereby the unmanned autonomous vehicle determines a position on the terrain by triangulation.
- the positioning means is, for example, based on recognition of reference points in images of the terrain, for example images made with the aid of the camera of the unmanned autonomous vehicle.
- Knowing the viewing angle of the camera and the position and the alignment of the camera on the unmanned autonomous vehicle, it is possible by means of trigonometry and/or photogrammetry to automatically estimate a distance from a reference point in an image to the camera and the unmanned autonomous vehicle, a distance between two reference points in an image and/or a dimension of a reference point in an image, even if the camera is only suitable for taking two-dimensional images, so that the position of the unmanned autonomous vehicle on the terrain can be determined.
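The trigonometric estimate described above can be sketched as follows, assuming a flat ground plane and hypothetical camera parameters (mounting height, downward tilt, vertical viewing angle); these parameter names are illustrative assumptions, not taken from the application.

```python
import math

def ground_distance(cam_height_m, tilt_deg, vfov_deg, pixel_row, image_height_px):
    """Estimate the horizontal distance from the camera to a ground point
    seen at a given pixel row, assuming flat ground, a known camera height
    and a known downward tilt of the optical axis (illustrative sketch)."""
    # Angle of this pixel row relative to the optical axis
    # (row 0 = top of the image, half the vertical field of view up).
    offset_deg = (pixel_row / image_height_px - 0.5) * vfov_deg
    # Total depression angle of the ray below the horizontal.
    depression_deg = tilt_deg + offset_deg
    if depression_deg <= 0:
        raise ValueError("ray does not intersect the ground plane")
    # Right-triangle relation: distance = height / tan(depression).
    return cam_height_m / math.tan(math.radians(depression_deg))
```

With a camera 0.5 m above the ground tilted 45° downward, a point on the optical axis (the centre row of the image) lies 0.5 m ahead of the camera.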
- the memory comprises a working memory and non-volatile memory for storing, for example, data, such as images captured by the camera, and, for example, a program executed by the processor, wherein the method is executed when the program is executed.
- the tool is any tool suitable for performing a task on the terrain. Non-limiting examples of suitable tools are a lawnmower, a vacuum cleaner, a brush, a spray lance, pruning shears, etc.
- the method comprises the steps of:
- the terrain can be indoors, for example a factory hall or a function room, or outdoors, for example a garden, a soccer field, or a square.
- the at least one image may or may not be stored in the memory of the unmanned autonomous vehicle.
- the at least one image may or may not be transmitted by the unmanned autonomous vehicle to an external processing unit. It will be apparent to one skilled in the art that if the unmanned autonomous vehicle moves to another position on the terrain, at least one image of the terrain is captured again using the camera of the unmanned autonomous vehicle. Preferably, during movements by the unmanned autonomous vehicle along a trajectory from a first position to a second position, recordings are made at regular intervals.
- the images are captured at an adjusted rate, expressed in frames per second, so that consecutive images overlap at least partially. It will be apparent to one skilled in the art that the rate of capturing images depends on the speed at which the unmanned autonomous vehicle moves on the terrain.
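As a sketch of this relationship, the minimum capture rate for a given vehicle speed can be derived from the length of the ground footprint of one image and the desired overlap; the function and its parameters are illustrative assumptions, not part of the application.

```python
def min_frame_rate(speed_m_s, footprint_m, overlap_fraction):
    """Minimum capture rate (frames per second) so that consecutive ground
    footprints overlap by at least `overlap_fraction` (illustrative sketch).
    Between two frames the vehicle may advance at most the non-overlapping
    part of one footprint."""
    advance_per_frame_m = (1.0 - overlap_fraction) * footprint_m
    return speed_m_s / advance_per_frame_m
```

For example, a vehicle moving at 0.5 m/s with a 1 m footprint and 50% desired overlap needs at least one frame per second; doubling the speed doubles the required rate.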
- the unmanned autonomous vehicle comprises at least one global neural network and at least one local neural network.
- a global neural network has been trained using a global training set.
- a global training set comprises at least 70% images of terrains other than the terrain where the unmanned autonomous vehicle is located.
- a global training set comprises at least 80% images of other terrains, more preferably at least 90%, even more preferably at least 95%, and even more preferably at least 99.99%.
- a global training set comprises images of many terrains to obtain a global neural network that can be used as a standard for a multitude of different terrains.
- a global neural network is preferably trained on an external processing unit and loaded into the memory of the unmanned autonomous vehicle. It will be apparent to one skilled in the art that if the unmanned autonomous vehicle comprises several global neural networks, the global neural networks were preferably trained with different global training sets.
- Having at least one global neural network is advantageous because it allows the unmanned vehicle to be deployed immediately, without a user having to first create a training set of images of the entire terrain and train a neural network. This would require a lot of time for the user to capture a large number of images of the terrain and to annotate all of these images. This is also expensive for a user because a lot of memory storage is required to save the training set.
- a local neural network has been trained using a local training set.
- a local training set comprises only images of the terrain where the unmanned autonomous vehicle is located. The images of a local training set were created using a separate digital camera device, a GSM device with a camera, a tablet with a camera, the camera of the unmanned autonomous vehicle, or another suitable camera, and added to the mentioned local training set.
- a local neural network is trained on an external processing unit and loaded into the memory of the unmanned autonomous vehicle. Alternatively, a local neural network is trained on the processor of the unmanned autonomous vehicle. In that case, a local training set is also available in the memory of the unmanned autonomous vehicle.
- one or more neural networks from a group formed by the at least one global neural network and the at least one local neural network are selected. In concrete terms, this means that one or more neural networks are selected from all global neural networks and all local neural networks comprised in the unmanned autonomous vehicle.
- each of the selected neural networks determines at least one classification of at least one soil type in the at least one image.
- a classification can be binary, for example a soil type is grass (1) or is not grass (0), or can also be a value that represents a probability, for example a soil type with 83% probability of grass.
- a positive classification refers to a classification with a probability greater than 60%, preferably greater than 75%, more preferably greater than 90%, and even more preferably greater than 98%. It will be apparent to one skilled in the art that in a binary system, a value of 1 is a positive classification and a value of 0 is not a positive classification.
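A minimal sketch of this thresholding rule, using the 60% lower bound mentioned above (the function name and default are illustrative):

```python
def is_positive(classification, threshold=0.60):
    """A classification counts as positive when its probability exceeds the
    threshold (60% here, the preferred lower bound). Binary outputs map onto
    the same rule: 1 is positive, 0 is not."""
    return classification > threshold
```

So a soil type classified as grass with 83% probability is a positive classification for grass, while 50% is not; binary values 1 and 0 behave the same way.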
- a selected neural network can be trained for one or more classifications.
- classifications are grass, gravel, stone floor, soil, flower bed, leaves, parquet, vegetable garden, etc.
- Selected neural networks can be trained for different classifications, equal classifications, or for only partially different or equal classifications.
- An image can comprise a single soil type.
- An image can comprise several soil types, which are separated from each other, which run through each other or a combination of both.
- the at least one local neural network is advantageous for obtaining a better classification in those specific cases where the at least one global neural network classifies at least one soil type in an image incorrectly.
- the at least one local neural network is additionally advantageous because only a lower number of images in the local training set is required to achieve a better classification in those specific cases, compared to retraining the at least one global neural network by adding images from the terrain to the global training set.
- the latter solution would require an enormous number of images to have any impact on the training of the at least one global neural network for those specific cases, since the global training set already contains a plethora of images from a large number of different terrains.
- one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network are selected based on the position of the unmanned autonomous vehicle.
- a user does not have to train a local neural network for the entire terrain, but only for those parts where a global neural network produces a misclassification, so that capturing the images for a local training set and training a local neural network requires only limited time and memory.
- Because several global neural networks and/or several local neural networks can be selected, it is possible to make an optimal selection according to the desired classifications in a part of the terrain. For example, a large part of a garden could be a lawn. A correct classification of grass is determined everywhere in the garden for images of the lawn, except around a tree.
- By training a local neural network with a limited set of images and selecting this local neural network only around the tree, the sparse grass around the tree is properly mowed. If there is also a flower bed in the vicinity of the tree, for example, a second local neural network can be trained with images of the flower bed, and both local neural networks can be selected in the vicinity of the tree to successfully distinguish between the sparse grass and the flower bed around the tree.
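The tree example above can be sketched as a position-dependent selection; the circular region around the tree, its coordinates, and the network names are all invented for illustration.

```python
# Hypothetical selections: outside the tree region only the global network
# classifies; near the tree the two local networks are added.
SELECTIONS = {
    "default":     ["global_net"],
    "around_tree": ["global_net", "local_net_sparse_grass", "local_net_flower_bed"],
}

def select_networks(position, tree_center=(10.0, 4.0), radius_m=2.0):
    """Return the neural networks selected for the vehicle's position:
    within `radius_m` of the (assumed) tree, the local networks for sparse
    grass and the flower bed join the selection."""
    dx = position[0] - tree_center[0]
    dy = position[1] - tree_center[1]
    zone = "around_tree" if dx * dx + dy * dy <= radius_m * radius_m else "default"
    return SELECTIONS[zone]
```

Elsewhere on the lawn the global network alone suffices, which is why the local training sets can stay small.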
- the processor creates at least one control signal for the drive unit and/or the tool.
- the at least one control signal is created based on the classifications determined by the selected neural networks.
- the at least one control signal determines which action the drive unit and/or the tool performs.
- the at least one control signal determines how the unmanned autonomous vehicle moves or which tool is used. This is advantageous because a user of an unmanned autonomous vehicle does not have to determine for every position on the terrain what action the unmanned autonomous vehicle will perform. It is sufficient to determine which action should be performed for a specific soil type.
- For example, if a stone floor has been determined as a positive classification for the soil type, the at least one control signal determines that the unmanned autonomous vehicle continues to drive over the stone floor and sweeps the stone floor.
- Other examples include mowing grass if grass is determined as a positive classification for the soil type, watering a flower bed if flowers are determined as a positive classification for the soil type, sweeping up leaves if fallen leaves are determined as a positive classification for the soil type, etc.
- the at least one control signal is preferably created based on a classification for a soil type immediately adjacent to the unmanned autonomous vehicle, preferably in a driving direction of the unmanned autonomous vehicle.
- the at least one control signal is preferably created based on multiple classifications for the soil types immediately adjacent to the unmanned autonomous vehicle, preferably in a driving direction of the unmanned autonomous vehicle.
- a histogram of soil type classifications is created for an area immediately adjacent to the unmanned autonomous vehicle, wherein the at least one control signal is created based on a classification with a highest value in the histogram.
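A minimal sketch of this histogram approach, assuming per-pixel (or per-cell) classification labels are available for the area ahead of the vehicle:

```python
from collections import Counter

def dominant_soil_type(per_cell_classes):
    """Build a histogram of soil-type classifications for the area
    immediately adjacent to the vehicle and return the soil type with the
    highest count; the control signal would be created from this class."""
    histogram = Counter(per_cell_classes)
    return histogram.most_common(1)[0][0]
```

If seven cells ahead of the vehicle classify as grass and three as soil, the histogram peaks at grass and the mowing action would be chosen; the rule-based variant described further on exists precisely because this winner-takes-all choice can fail on sparse grass.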
- the at least one control signal is created according to predetermined rules based on the classifications determined by the selected neural networks. At least two different sets of predetermined rules are defined. A set of predetermined rules is selected depending on the position of the unmanned autonomous vehicle.
- This embodiment is advantageous for determining other actions by the unmanned autonomous vehicle, depending on the position of the unmanned autonomous vehicle on the terrain. For example, fallen leaves can be on a lawn, making it possible that on certain parts of the lawn no grass is visible anymore.
- the unmanned autonomous vehicle could cut the grass even further, at least partially shredding the fallen leaves. At the edge of the lawn, for example, a curbstone may be hidden under the fallen leaves. If the unmanned autonomous vehicle continues to mow the grass here, it is very likely that the tool of the unmanned vehicle for mowing the grass will be damaged by the curbstone. So here it is not desirable to cut the grass further. Based on the classifications determined by the selected neural networks, it is not possible to distinguish between both positions.
- a rule may specify that if grass, soil, and weeds have been determined as positive classifications for soil types, then mowing will take place at these positions. This is particularly advantageous compared to a control signal created based on a classification with the highest value in a histogram, as in a previously described embodiment. For example, in a severe drought, it is possible that grass is much sparser than usual and there is more soil and weeds, so that, for example, soil is the highest value in the histogram, and a decision could be made not to mow, while the unmanned autonomous vehicle is actually on a lawn.
- the at least one control signal is created based on multiple positive classifications for the soil types and based on the selected set of rules, wherein the selected set of rules comprises a threshold value for a surface area of at least one soil type with a positive classification.
- the selected set of rules may comprise a minimum value for the surface area of the grass.
- This threshold value can be expressed as an absolute value, for example, an area in m² or a number of pixels in a camera image, or as a relative value, for example, a percentage of a total area in m² or in pixels in a camera image or a share in a histogram.
- the selected set of rules could comprise a threshold value of, for example, at least 50% grass or at least 3 m² of grass to be allowed to mow at that position.
- the selected set of rules could comprise a threshold value that, for example, allows mowing at that position only with a maximum of 30% soil or a maximum of 0.5 m² of soil. It is also clear from this example that threshold values can be logically combined.
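The logical combination of threshold values in these examples can be sketched as follows; the 50%/3 m² grass and 30%/0.5 m² soil figures are the ones quoted above, while the rule function itself is an illustrative assumption.

```python
def may_mow(areas_m2):
    """Illustrative combined rule set: mow only if at least 50% of the
    visible area (or at least 3 m²) is grass AND at most 30% of it (or at
    most 0.5 m²) is bare soil. `areas_m2` maps soil type to surface area."""
    total = sum(areas_m2.values())
    grass = areas_m2.get("grass", 0.0)
    soil = areas_m2.get("soil", 0.0)
    grass_ok = grass / total >= 0.50 or grass >= 3.0
    soil_ok = soil / total <= 0.30 or soil <= 0.5
    return grass_ok and soil_ok
```

A scene with 4 m² of grass and 0.4 m² of soil passes both thresholds, while a patch that is mostly bare soil fails the grass threshold and is not mowed.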
- a specific selection of a set of predetermined rules is made for each control signal based on the position of the unmanned autonomous vehicle.
- This embodiment is advantageous because it allows a new set of predetermined rules to be selected at a certain position for a first control signal, while for a second control signal, a previously selected set of rules can be retained or, for example, another set of predetermined rules can be selected.
- a new set of predetermined rules could be selected for a control signal for the drive unit at a position where the terrain is often wet, where the speed of the unmanned autonomous vehicle is reduced if soil has been determined as a positive classification for the soil type, to prevent wheels of the drive unit from slipping and damaging the terrain, while the control signal for a tool for picking up fallen leaves still uses the same collection of predetermined rules.
- the one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network are, in addition to the position of the unmanned autonomous vehicle, also selected on the basis of time.
- time can be a time of a day, a day of a week, a month, or a season.
- This embodiment is advantageous because a terrain can have a different appearance during a day, week, month, or season, making it difficult to correctly classify soil types.
- the area may be in the shade at dusk, certain types of flowers may only be present in the spring, etc.
- the chosen neural networks can be optimized for a time. This embodiment is applicable to both global neural networks and local neural networks.
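As an illustrative sketch of this time-dependent selection, a hypothetical local network trained on spring flowers could join the selection only in spring; the season boundaries (meteorological, northern hemisphere) and network names are assumptions.

```python
from datetime import datetime

def season_of(date):
    """Map a date to a meteorological season (northern hemisphere assumed)."""
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "autumn", 10: "autumn", 11: "autumn"}[date.month]

def select_networks_for_time(base_selection, date):
    """Add a hypothetical spring-flower local network to the selection only
    in spring, when those flowers are actually present on the terrain."""
    selection = list(base_selection)
    if season_of(date) == "spring":
        selection.append("local_net_spring_flowers")
    return selection
```

The same pattern extends to time of day (e.g. a network optimized for dusk lighting) or day of the week.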
- the set of predetermined rules is also selected based on time, in addition to the position of the unmanned autonomous vehicle.
- time can be a time of a day, a day of a week, a month, or a season.
- This embodiment is advantageous because during a day, week, month or season, other tasks on a terrain are necessary.
- certain plants are preferably sprayed in the morning or evening.
- drought in seasons outside of summer is less of a problem than in summer.
- a specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network is made for each control signal based on the position of the unmanned autonomous vehicle.
- This embodiment is advantageous because as a result a new selection of neural networks can be made at a certain position for a first control signal, while for a second control signal a selection of neural networks that has already been made can be retained or, for example, another selection of neural networks can be made.
- a new selection of neural networks could be made, for example, where a local neural network is added to the selection of neural networks in order to classify the flowerbeds as soil type, to prevent the mowing tool from mowing the flowerbeds, while the drive control signal still uses the same selection of neural networks.
- a first group of zones are defined on a digital map of the terrain.
- the digital map is presented visually in a graphical application, wherein the first group of zones are drawn graphically on the digital map.
- the graphical application is preferably suitable for use on a smartphone and/or a tablet and/or a computer.
- the graphical application is suitable for use in a web browser.
- a specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network is associated with each zone of the first group of zones.
- the association also takes place in the aforementioned graphical application.
- the digital map of the terrain comprises the first group of zones and the associated selections of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network.
- the digital map is loaded into the unmanned autonomous vehicle.
- the digital map is loaded into the unmanned vehicle using a data cable.
- a non-limiting example is a USB cable.
- the digital map is loaded into the unmanned vehicle over a wireless connection.
- Non-limiting examples are a Bluetooth connection or a WiFi connection.
- the unmanned autonomous vehicle uses the digital map for this.
- the unmanned autonomous vehicle uses the therewith associated selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network while processing the at least one image of the terrain.
- This embodiment is advantageous because a user can simply define on a digital map a first group of zones on the terrain where, for example, using a standard global neural network is not sufficient for a correct classification and where a different selection of neural networks is required. Due to the use of a digital map, this does not require the intervention of, for example, a technician. The user only has to associate an appropriate selection of neural networks with a zone from the first group of zones. This can be done easily in a graphical application, for example.
- a second group of zones is defined on a digital map of the terrain.
- the digital map may or may not be equal to the digital map in a previously described embodiment on which a first group of zones is defined.
- the digital map can preferably be represented in the same way as in the previously described embodiment, and the second group of zones can preferably be defined in the same way.
- Each zone of the second group of zones is associated with a specific selection of a set of predetermined rules.
- the association also takes place in a graphical application, as described in the aforementioned embodiment.
- the digital map comprises the zones from the second group of zones and the associated selections of sets of predetermined rules.
- the digital map is loaded into the unmanned autonomous vehicle.
- the digital map is loaded into the unmanned autonomous vehicle in the same manner as described in the aforementioned embodiment.
- the unmanned autonomous vehicle determines the zone from the second group of zones where the unmanned autonomous vehicle is located based on the position of the unmanned autonomous vehicle on the terrain.
- the unmanned autonomous vehicle uses the therewith associated set of predetermined rules while creating the at least one control signal.
- This embodiment is advantageous because a user can easily define a second group of zones on the terrain on a digital map where, for example, a different set of predetermined rules is required, for example because a different action by the unmanned autonomous vehicle is desired in case of equal positive classifications of a soil type. Due to the use of a digital map, this does not require the intervention of, for example, a technician. The user only needs to associate a suitable set of predetermined rules with a zone from the second group of zones. This can be done easily in a graphical application, for example.
- the first group of zones is equal to the second group of zones. This is advantageous because it makes it clear to a user which selection of neural networks and which set of predetermined rules will be used by the unmanned autonomous vehicle at a position in the terrain.
- the zones only need to be defined once.
- zones are defined hierarchically.
- Associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically higher zone are automatically associated with a hierarchically lower zone.
- Associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically lower zone are not associated with a hierarchically higher zone.
- an associated selection of neural networks and/or associated set of predetermined rules of a lower hierarchical zone for a specific control signal takes precedence over an associated selection of neural networks and/or associated set of predetermined rules of a higher hierarchical zone.
- This embodiment is advantageous for easily defining smaller zones where at least one control signal has to be created in a different way, for example because a different or an additional action has to be performed by the unmanned autonomous vehicle or, for example, because a different selection of neural networks is required to obtain correct classifications for a certain action, so that a correct control signal is effectively created for the said action.
- An action can, for example, be mowing the grass, but it can also be stopping the mowing of grass.
- the associated selection of neural networks and/or the associated set of predetermined rules of a hierarchically higher zone can still be used. As a result, it is not necessary to divide the entire terrain into many smaller zones and to associate a selection of neural networks and/or a set of predetermined rules with all zones for all possible control signals.
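The hierarchical inheritance and precedence described above can be sketched as a walk up a parent chain, where the lowest zone that defines an entry for a control signal wins. The class and field names below are illustrative assumptions:

```python
# Hypothetical hierarchical zone model: a child zone inherits rule-set
# associations from its parent; its own associations take precedence.

class Zone:
    def __init__(self, name, parent=None, rules=None):
        self.name = name
        self.parent = parent
        self.rules = rules or {}  # control signal -> rule-set identifier

    def rule_for(self, signal):
        # Walk up the hierarchy: the lowest zone defining the signal wins,
        # otherwise the hierarchically higher zone's association is used.
        zone = self
        while zone is not None:
            if signal in zone.rules:
                return zone.rules[signal]
            zone = zone.parent
        return None

garden = Zone("garden", rules={"mow": "C1", "drive": "D1"})
lawn = Zone("lawn", parent=garden, rules={"mow": "C2"})

print(lawn.rule_for("mow"))    # overridden in the lower zone -> C2
print(lawn.rule_for("drive"))  # inherited from the higher zone -> D1
```

The same structure works for selections of neural networks: only the zones that need a deviation define an entry, and everything else falls through to the higher zone.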
- images for a local training set are captured using the camera of the unmanned autonomous vehicle.
- This embodiment is advantageous because no additional camera is needed to capture images for the local training set.
- This embodiment is additionally advantageous because images for the local training set are captured in the same orientation relative to the unmanned autonomous vehicle as when using the unmanned autonomous vehicle for performing tasks.
- This embodiment is particularly advantageous because the images for the local training set can be captured during the use of the unmanned autonomous vehicle for performing tasks or during an autonomous exploration of at least a part of the terrain.
- the images of the local training set are processed for training a local neural network using the processor of the unmanned autonomous vehicle.
- the images of the local training set are stored in the memory of the unmanned autonomous vehicle for this purpose.
- the images are labeled using an external device, such as a smartphone, a tablet or a computer, or on a screen comprised in the unmanned autonomous vehicle.
- the images are first sent from the unmanned vehicle over a wired or wireless connection to the external device, after which the labels are sent back to the unmanned autonomous vehicle.
- suitable wired or wireless connections are provided in previously described embodiments.
- the labeled images are processed by the processor to train the local neural network.
- This embodiment is advantageous because no capacity on external processing units, such as a server in a cloud environment, needs to be reserved to train a local neural network.
- the images of the local training set are forwarded to an external processing unit and added to one or more global training sets.
- the images can be labeled before or after forwarding.
- the images are labeled afterwards, preferably by a technician.
- a selection of images from the local training set is added to one or more global training sets.
- the selection of images from the local training set does not have to be the same for each global training set.
- the one or more global training sets are processed by the external processing unit for training one or more global neural networks.
- This embodiment is advantageous for incrementally obtaining large global training sets and for incrementally improving global neural networks. This is particularly advantageous because local training sets are often created in cases where global neural networks do not classify soil types accurately enough.
- the unmanned autonomous vehicle determines its position using a digital map of the terrain and based on images of the terrain, which are captured using the camera of the unmanned autonomous vehicle. Reference points in the images are compared with reference points on the digital map.
- This embodiment is advantageous because an unmanned autonomous vehicle only needs a camera for both capturing images for classification of soil types and determining the position of the unmanned autonomous vehicle on the terrain.
- At least one of the selected neural networks determines at least one classification of at least one object in the at least one image.
- This can be both a global neural network and/or a local neural network.
- the at least one control signal is created based on the classifications for soil types and objects determined by the selected neural networks.
- Previously described embodiments are also applicable, mutatis mutandis, for training a local neural network or a global neural network for classifying an object.
- This embodiment is advantageous if a task to be performed by the unmanned autonomous vehicle depends not only on a soil type, but also on the presence or absence of an object, for example the presence of a charging station for charging a battery of the unmanned autonomous vehicle or availability of a waste container for dumping household or garden waste.
- a weight is assigned to each of the classifications determined by the selected neural networks.
- the weight depends on the position of the unmanned autonomous vehicle. Weights are advantageous, for example, for use as threshold values in predetermined rules. Weights are advantageous, for example, if an image comprises multiple soil types which run together, to give more or less weight to a certain positive classification. Weights are particularly advantageous for use at zone edges.
- Weights that gradually change as they approach the edge of a zone can provide a smooth transition from a first zone to a second zone, for example by gradually transitioning weights corresponding to a first predetermined rule in a first zone to weights corresponding to a second predetermined rule in a second zone, or for example by gradually transitioning weights given to certain positive classifications in a first zone to weights given to the same positive classifications in a second zone.
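The smooth weight transition at a zone edge could be realized, for instance, with a linear blend inside a margin around the boundary. The margin width and weight values below are assumed example numbers, not taken from the patent:

```python
# Hypothetical sketch: within a blend margin of the zone edge, the weight
# given to a positive classification is linearly interpolated between the
# current zone's weight and the neighbouring zone's weight.

def blended_weight(distance_to_edge, w_this_zone, w_next_zone, margin=1.0):
    """distance_to_edge: metres from the vehicle to the boundary (>= 0,
    measured inside the current zone)."""
    if distance_to_edge >= margin:
        return w_this_zone
    t = distance_to_edge / margin  # 0 at the edge, 1 at the margin
    return w_next_zone + t * (w_this_zone - w_next_zone)

print(blended_weight(2.0, 1.0, 0.2))  # far from the edge -> 1.0
print(blended_weight(0.0, 1.0, 0.2))  # on the edge -> 0.2 (next zone's weight)
print(blended_weight(0.5, 1.0, 0.2))  # halfway through the margin -> 0.6
```

Any monotone blend (e.g. a smoothstep curve) would serve the same purpose; linear interpolation is simply the shortest sketch.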
- it is determined whether a soil type belongs to a set of predetermined soil types based on the classifications determined by the selected neural networks.
- the unmanned autonomous vehicle only moves across soil types from the set of predetermined soil types.
- This embodiment is advantageous because it allows the unmanned autonomous vehicle to move autonomously over a terrain, wherein the unmanned autonomous vehicle remains within a perimeter on the terrain.
- the perimeter is determined by a transition between soil types that belong to the set of predetermined soil types and soil types that do not belong to the set of predetermined soil types.
- the unmanned autonomous vehicle cannot cross this transition. For example, grass belongs to the set of predetermined soil types, while flower beds, soil, and terrace are not part of the set of predetermined soil types.
- the unmanned autonomous vehicle remains on a lawn that is bordered by flower beds and a terrace. It is therefore not necessary to create physical boundaries around the lawn or to place or bury a signal wire around part of the terrain.
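The virtual perimeter described above reduces to a simple membership test on the classified soil ahead. The soil-type names and the classifier interface in this sketch are assumptions:

```python
# Minimal sketch: the vehicle only advances when the soil type ahead is
# classified as one of the permitted types, keeping it inside the perimeter
# without physical boundaries or a buried signal wire.

ALLOWED_SOIL_TYPES = {"grass"}

def may_advance(classified_soil_ahead):
    """Return True only if the soil type ahead belongs to the allowed set."""
    return classified_soil_ahead in ALLOWED_SOIL_TYPES

for soil in ("grass", "flower_bed", "terrace"):
    print(soil, may_advance(soil))  # only grass permits advancing
```

In practice the drive control signal would be created from this predicate together with the predetermined rules discussed earlier.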
- the set of predetermined soil types depends on the position of the unmanned autonomous vehicle on the terrain.
- This embodiment is particularly advantageous for creating corridors for the unmanned vehicle between different zones on the terrain.
- the unmanned autonomous vehicle comprises, for example, a brush as a tool for brushing the tiled terraces.
- the predetermined set of soil types consists of tiled terraces, so the unmanned vehicle will only move across the tiled terraces. This does mean that the unmanned autonomous vehicle will only brush one tiled terrace, because the unmanned vehicle cannot cross the gravel path to move to the other tiled terrace.
- a gravel path can be added as a soil type to the set of predetermined soil types at the level of the desired corridor, allowing the unmanned autonomous vehicle to move across the gravel path to the other tiled terrace at the level of the corridor.
- a gravel path has not been added to the set of predetermined soil types, causing the unmanned autonomous vehicle not to enter the gravel path there. It is clear that in this example, based on the position of the unmanned autonomous vehicle, it can be ensured that a control signal is created for the brush so that the unmanned autonomous vehicle does not brush the gravel path.
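The corridor mechanism above amounts to making the allowed soil set depend on position. In the following sketch the corridor is a rectangle in terrain coordinates; the zone test, coordinates, and soil names are all illustrative assumptions:

```python
# Hypothetical position-dependent allowed-soil set: gravel is permitted only
# inside the corridor, letting the vehicle cross the gravel path between the
# two tiled terraces there and nowhere else.

BASE_ALLOWED = {"tiled_terrace"}
CORRIDOR_ALLOWED = {"tiled_terrace", "gravel"}

def allowed_soil_types(position, corridor):
    """corridor: (x_min, x_max, y_min, y_max) rectangle in terrain coordinates."""
    x, y = position
    x_min, x_max, y_min, y_max = corridor
    if x_min <= x <= x_max and y_min <= y <= y_max:
        return CORRIDOR_ALLOWED
    return BASE_ALLOWED

corridor = (4.0, 6.0, 0.0, 2.0)
print("gravel" in allowed_soil_types((5.0, 1.0), corridor))  # inside -> True
print("gravel" in allowed_soil_types((1.0, 1.0), corridor))  # outside -> False
```

The same position check can gate the brush control signal, so the gravel path is crossed but never brushed.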
- in a second aspect, the invention relates to an unmanned autonomous vehicle for performing tasks on a terrain.
- the unmanned autonomous vehicle comprises a drive unit for moving the unmanned vehicle over the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain and a tool.
- the drive unit preferably comprises at least one wheel and a motor for driving the wheel.
- the motor is an electric motor.
- the unmanned autonomous vehicle comprises a battery for powering the motor and other electrical systems.
- the unmanned autonomous vehicle can comprise two, three, four, or more wheels, wherein at least one wheel, preferably at least two wheels, are coupled to the motor for driving.
- the at least one wheel can be part of a caterpillar track, the caterpillar track being drivable by the motor by means of the at least one wheel.
- the unmanned autonomous vehicle comprises a steering device for steering the unmanned autonomous vehicle.
- the steering device is a conventional steering device in which at least one wheel is rotatably arranged.
- the steering device is part of the drive unit, wherein two wheels on opposite sides of the unmanned autonomous vehicle can be driven differently by the motor. Differently means with a different speed and/or opposite direction of rotation.
- the steering device may or may not be part of the drive unit.
- the camera is a digital camera.
- the camera is at least suitable for taking two-dimensional images.
- the camera is suitable for taking three-dimensional images, with or without depth determination.
- the camera has a known viewing angle.
- the camera has a known position and alignment on the unmanned autonomous vehicle.
- the camera is positioned in such a way that at least part of the soil of the terrain is captured in the image. Because the viewing angle of the camera and the position and alignment of the camera on the unmanned autonomous vehicle are known, a position, relative to the position of the unmanned vehicle, of a soil type of the terrain that is visible on an image captured by the digital camera, is known.
- the camera has a fixed position and alignment on the unmanned autonomous vehicle.
- the camera is rotatably arranged, the camera being 360° rotatable in a horizontal plane and 180° rotatable in a vertical plane.
- the rotatable arrangement of the camera is preferably drivably coupled to motors with encoders.
- Motors with encoders are advantageous for knowing the position and alignment of a rotatably mounted camera.
- the camera is also suitable for capturing images with non-visible light, such as infrared light or ultraviolet light. This is advantageous because it allows images of the terrain to be captured with visible light, infrared light, and ultraviolet light, from which different information can be obtained.
- first camera captures images using visible light
- second camera captures images using infrared light
- third camera captures images using ultraviolet light
- the first camera, the second camera, and the third camera have an overlapping field of view. This is advantageous for combining information from images captured using the first camera, the second camera, and the third camera.
- the unmanned autonomous vehicle can comprise several similar cameras.
- the positioning means for determining the position of the unmanned vehicle may be any suitable means.
- the positioning means is, for example, a Global Navigation Satellite System (GNSS) , such as GPS, GLONASS or Galileo.
- the positioning means is, for example, a system with wireless beacons on the terrain, whereby the unmanned autonomous vehicle determines a position on the terrain by triangulation.
- the positioning means is, for example, based on recognition of reference points in images of the terrain, for example images made with the aid of the camera of the unmanned autonomous vehicle.
- Knowing the viewing angle of the camera and the position and the alignment of the camera on the unmanned autonomous vehicle, it is possible by means of trigonometry and/or photogrammetry to automatically estimate a distance from a reference point in an image to the camera and the unmanned autonomous vehicle, a distance between two reference points in an image and/or a dimension of a reference point in an image, even if the camera is only suitable for taking two-dimensional images, so that the position of the unmanned autonomous vehicle on the terrain can be determined.
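One elementary instance of the trigonometry mentioned above: for a forward-tilted camera at known height, a ground point seen at a given angle below the horizon lies at distance h / tan(angle). All parameters in this sketch (camera height, pitch, vertical field of view, image size) are assumed example values, not specifications from the patent:

```python
# Hypothetical monocular distance estimate to a ground point from its pixel
# row, using the known camera height, pitch, and vertical field of view.
import math

def ground_distance(pixel_row, image_height_px=480,
                    cam_height_m=0.5, cam_pitch_deg=30.0, vfov_deg=45.0):
    """Estimate horizontal distance to the ground point imaged at pixel_row.

    pixel_row 0 is the top of the image; larger rows look closer to the vehicle.
    """
    # Angle of this row relative to the optical axis (positive = below axis).
    offset_deg = (pixel_row / image_height_px - 0.5) * vfov_deg
    angle_below_horizon = math.radians(cam_pitch_deg + offset_deg)
    if angle_below_horizon <= 0:
        return math.inf  # this row looks at or above the horizon
    return cam_height_m / math.tan(angle_below_horizon)

print(round(ground_distance(240), 3))  # image centre: 0.5 / tan(30 deg) = 0.866
```

Rows nearer the bottom of the image map to shorter distances, as expected for a downward-tilted camera; a full photogrammetric solution would additionally model lens distortion.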
- the tool is any tool suitable for performing a task on the terrain.
- suitable tools are a lawnmower, a vacuum cleaner, a brush, a spray lance, pruning shears, etc.
- the unmanned autonomous vehicle comprises a memory and a processor.
- the processor is configured to perform a method according to the first aspect.
- the memory comprises a working memory and a non-volatile memory.
- Such an unmanned autonomous vehicle is advantageous because it can be immediately deployed by a user of the unmanned autonomous vehicle on a terrain for performing tasks without additional training of a neural network, while a local neural network can be trained with limited user effort, so that incorrect classifications in a part of the terrain can be easily corrected.
- the camera of the unmanned vehicle is only suitable for taking two-dimensional images.
- only one camera for two-dimensional images is mounted on the unmanned autonomous vehicle.
- This embodiment is particularly advantageous because it results in a very simple unmanned autonomous vehicle for performing tasks on a terrain.
- a method according to the first aspect is preferably performed with an unmanned autonomous vehicle according to the second aspect, and an unmanned autonomous vehicle according to the second aspect is preferably configured for performing a method according to the first aspect.
- the invention relates to a use of a method according to the first aspect and/or an unmanned autonomous vehicle according to the second aspect for autonomously maintaining a garden.
- This use results in an advantageous autonomous maintenance of a garden using an unmanned autonomous vehicle because the unmanned autonomous vehicle can be immediately used by a user for garden maintenance, without the user having to train a neural network of the unmanned autonomous vehicle and because the user can train a local network with very limited effort, if the unmanned autonomous vehicle performs an unwanted action or fails to perform a desired action due to incorrect classifications in a part of the garden, to correct the incorrect classifications.
- The invention is described by way of non-limiting figures illustrating the invention, which are not intended to and should not be interpreted as limiting the scope of the invention.
- Figure 1 shows a schematic representation of a terrain, indicating different zones.
- the terrain comprises seven zones.
- the zones are arranged hierarchically in two layers.
- a highest hierarchical layer comprises a first zone (1), which is shaded in Figure 1, and within zone (1) three adjacent zones, namely a second zone (2), a third zone (3) and a fourth zone (4).
- a lowest hierarchical layer comprises three zones, namely a fifth zone (5) located within the second zone (2), a sixth zone (6) located within the third zone (3) and a seventh zone (7) located within the fourth zone (4).
- the terrain is a garden, wherein the first zone (1) is for example a border with bushes, the second zone (2) is a lawn, the third zone (3) is a lawn with a concrete path and the fourth zone (4) is again a lawn.
- a charging station is installed in zone (5).
- a compost bin has been placed in zone (6) .
- the zone (7) is a sandbox.
- Figure 2 shows a schematic representation of global neural networks and local neural networks contained in an unmanned autonomous vehicle, according to an embodiment of the present invention.
- the unmanned autonomous vehicle in this example has two global neural networks (A) and (B) and two local neural networks (C) and (D).
- classification (a) is grass, classification (b) is clover, classification (c) is soil, classification (d) is dandelion, classification (e) is bark, and classification (f) is a charging station.
- the global neural network (B) is trained to determine five classifications (f), (g), (h), (i) and (j).
- the classification (f) is again a charging station, but the global neural network (B) is trained with a different global training set in this example, which means that the global neural networks (A) and (B) can obtain different results for classification (f).
- Classification (g) is a person
- classification (h) is a tree
- classifica- tion (i) is a car
- classification (j) is a plant.
- the global neural network (B) in this example is specifically trained to determine object classifications rather than soil type classifications.
- the local neural network (C) is trained to determine two specific classifications for the specific terrain from this example.
- Classification (k) is a compost bin and classification (l) is a bucket.
- the local neural network (D) is trained to determine one specific classification for the specific terrain in this example.
- Classification (m) is sand.
- Figure 3 shows a schematic representation of a selection of neural networks and a set of rules according to a position of an unmanned autonomous vehicle, according to an embodiment of the present invention, in a terrain.
- the terrain is the terrain from the example in Figure 1 .
- the neural networks are the global neural networks (A) and (B) and the local neural networks (C) and (D) from Figure 2.
- Zone (1) is associated with a set of predetermined rules (C1) that stipulate that the grass is only mowed if there are positive classifications for grass (a), for dandelions (d) and for bark (e). This means that the unmanned autonomous vehicle is located in the border between the bushes, where there is tree bark and where wild grass and dandelions grow, which may be mowed.
- Zone (2) is associated with a set of predetermined rules (C2) that stipulate that the grass is only mowed if there are positive classifications for clover (b) and for soil (c). This means that the unmanned autonomous vehicle is on the turned-over lawn, where any grass and clover may be mowed.
- the set of predetermined rules (C2) may comprise a threshold value for soil (c), whereby at a position in zone (2) mowing may only take place if an area of the soil (c) is lower than the threshold value. This threshold value can be defined as a relative or an absolute value.
- in zone (3) there is a concrete path.
- Zone (3) is associated with a set of predetermined rules (C3) that stipulate that the grass is only mowed if there is a positive classification for grass (a). This means that the unmanned autonomous vehicle is definitely on grass and not on the concrete path, which prevents the unmanned vehicle from being damaged by mowing on the concrete path.
- Zone (4) is associated with a set of predetermined rules (C4) that determine that the grass is only mowed if there are positive classifications for grass (a) and dandelions (d). The intention is that the grass in zone (4) is a bit longer and wilder. Mowing the grass only where dandelions are growing allows the grass to grow until dandelions appear.
- Zone (5) is associated with both the global neural network (A) and the global neural network (B).
- Zone (5) is associated with a set of predetermined rules (C5) that determine that the battery of the unmanned autonomous vehicle may be charged if there is a positive classification for a charging station (f), both by the global neural network (A) and the global neural network (B).
- Zone (6) is associated with both the global neural network (A) and the local neural network (C).
- Zone (6) is associated with a set of predetermined rules (C6), which determine that garden waste may be dumped into the compost bin if there are positive classifications for grass (a), soil (c) and dandelion (d) by the global neural network (A) and a positive classification for a compost bin (k) and not for a bucket (l) by the local neural network (C).
- the compost bin is placed in zone (6) in the grass next to the concrete path.
- the grass around the compost bin is mowed less often, resulting in dandelions and the grass around the compost bin disappearing in some places.
- the local neural network (C) it is possible to obtain a positive classification (k) for the specific compost bin in the garden of the example, even if the compost bin is moved and the compost bin is not confused with a specific bucket that is also often used in the garden.
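The combined rule for zone (6) described above could be encoded as in the following sketch. The dictionary-based interface and the label names are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical encoding of rule set (C6): garden waste may be dumped only if
# network (A) positively classifies grass, soil and dandelion, and network (C)
# positively classifies a compost bin but NOT a bucket.

def rule_c6(classifications_a, classifications_c):
    """classifications_*: dict mapping label -> bool (positive classification)."""
    return (classifications_a.get("grass", False)
            and classifications_a.get("soil", False)
            and classifications_a.get("dandelion", False)
            and classifications_c.get("compost_bin", False)
            and not classifications_c.get("bucket", False))

print(rule_c6({"grass": True, "soil": True, "dandelion": True},
              {"compost_bin": True, "bucket": False}))  # -> True
print(rule_c6({"grass": True, "soil": True, "dandelion": True},
              {"compost_bin": True, "bucket": True}))   # bucket seen -> False
```

Note how the negative condition on the bucket classification prevents the specific bucket used in the garden from being mistaken for the compost bin.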
- Only the local neural network (D) is associated with zone (7) .
- Zone (7) is associated with a set of predetermined rules (C7) which stipulate that the sandbox may be swept if there is a positive classification for sand (m).
Abstract
The present invention relates to a method for controlling an unmanned autonomous vehicle based on a soil type on a terrain, comprising positioning of an unmanned autonomous vehicle, capturing an image of the terrain with a camera of the vehicle, image processing, creating a control signal, wherein the vehicle comprises at least one global and one local neural network, wherein the global network is trained with a global training set comprising at least 70% images of other terrains and the local network is trained with a local training set comprising images of the terrain, wherein, based on the position, one or more networks are selected from a group formed by said global and local networks, wherein, during image processing, each of the selected networks determines at least one classification of at least one soil type in the image, and wherein the control signal is created based on the classifications. The invention also relates to an unmanned autonomous vehicle and a use.
Description
WORKING METHOD FOR CONTROLLING AN UNMANNED AUTONOMOUS VEHICLE BASED ON A SOIL TYPE
TECHNICAL FIELD
The invention relates to a method for controlling an unmanned autonomous vehicle based on a soil type on a terrain, to an unmanned autonomous vehicle suitable for carrying out the method and to a use of the method and/or the unmanned autonomous vehicle for the autonomous maintenance of a garden.
PRIOR ART
Such a method is known, inter alia, from WO 2018/220528 (WO ‘528).
WO ‘528 describes a method for controlling a tillage means based on image processing. The tillage means comprises a driving means and a tool. In a first step of the method, at least one image of the soil is recorded with the aid of a digital acquisition device placed on the tillage means. Subsequently, at least one convolution operation is performed on the image by means of a neural network, whereby at least one description of the soil is obtained, after which a control signal for the driving means or the tool is obtained based on the at least one description. The neural network is trained using a multitude of images.
This known method has the disadvantage that it only performs a certain action on the basis of a determination of a soil type. As a result, it is possible that an action is performed at a certain location that is not desired there. For example, the tillage means at one location might be allowed to drive over fallen leaves and perform a certain action there, but at another location the fallen leaves could hide a soil type where this action is unnecessary or could even damage the tillage means. Another disadvantage of the method is that the neural network must be trained with the aid of an enormous number of images. Capturing all these images requires a great deal of time, so that the tillage means can only be used on a terrain after a long period of recording images, unless the neural network of the tillage means has been trained in advance on the basis of an already existing collection of images. This has the additional disadvantage that the neural network is therefore not always optimally trained for describing soil types on the terrain on which the tillage means is active. This can lead to erroneous descriptions of the soil, whereupon erroneous actions can be performed by the tillage means, again resulting in possible damage to the tillage means.
The aim of the invention is to provide a method which eliminates those disadvantages.
SUMMARY OF THE INVENTION
In a first aspect, the present invention relates to a method according to claim 1.
The advantage of this method is that the unmanned vehicle comprises both at least one global neural network and at least one local neural network. Having at least one global neural network is advantageous because it allows the unmanned vehicle to be deployed immediately, without a user having to first create a training set of images of the entire terrain and train a neural network. This would require a lot of time for the user to capture a large number of images of the terrain and to annotate all of these images. This is also expensive for a user because a lot of memory storage is required to save the training set. The at least one local neural network is advantageous to obtain a better classification in specific cases where determining at least one classification of at least one soil type in an image by the global network results in incorrect classification. The at least one local neural network is additionally advantageous because only a lower number of images in the local training set is required to achieve a better classification in those specific cases, compared to retraining the at least one global neural network by adding images from the terrain to the global training set. The latter solution would require an enormous number of images to have any impact on the training of the at least one global neural network for those specific cases, since the global training set already contains a plethora of images from a large number of different terrains. It is particularly advantageous that one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network are selected based on the position of the unmanned autonomous vehicle.
As a result, a user does not have to train a local neural network for the entire terrain, but only for those parts where a global neural network produces a misclassification, so that capturing the images for a local training set and training a local neural network requires only limited time and memory. Because several global neural networks and/or several local neural networks can be selected, it is possible to make an optimal selection according to the desired classifications in a part of the terrain.
Preferred forms of the device are shown in claims 2 to 15.
A specific preferred form concerns a method according to claim 2.
An advantage of this embodiment is that the control signals are created on the basis of predetermined rules and that there are at least two sets of predetermined rules, a set of predetermined rules being selected depending on the position of the unmanned autonomous vehicle. This is advantageous because it allows the same selection of neural networks to be used in different parts of the terrain, so that the same classifications will be obtained with the same soil types, while still creating a different control signal. This way, it can be avoided that an action is performed on a certain part of a terrain where it is not desired.
In a second aspect, the present invention relates to an unmanned autonomous vehicle according to claim 16.
Such an unmanned autonomous vehicle is advantageous because it can be immediately deployed by a user of the unmanned autonomous vehicle on a terrain for performing tasks without additional training of a neural network, while a local neural network can be trained with limited user effort, so that incorrect classifications in a part of the terrain can be easily corrected.
A preferred form of the unmanned autonomous vehicle is described in dependent claim 17.
In a third aspect, the present invention relates to a use according to claim 18. This use results in an advantageous autonomous maintenance of a garden using an unmanned autonomous vehicle because the unmanned autonomous vehicle can be immediately used by a user for garden maintenance, without the user having to train a neural network of the unmanned autonomous vehicle, and because the user can train a local network with very limited effort, if the unmanned autonomous vehicle performs an unwanted action or fails to perform a desired action due to incorrect classifications in a part of the garden, to correct the incorrect classifications.
DESCRIPTION OF THE FIGURES
Figure 1 shows a schematic representation of a terrain, indicating different zones.
Figure 2 shows a schematic representation of global neural networks and local neural networks contained in an unmanned autonomous vehicle, according to an embodiment of the present invention.
Figure 3 shows a schematic representation of a selection of neural networks and a set of rules according to a position of an unmanned autonomous vehicle, according to an embodiment of the present invention, in a terrain.
DETAILED DESCRIPTION
Unless otherwise defined, all terms used in the description of the invention, including technical and scientific terms, have the meaning as commonly understood by a person skilled in the art to which the invention pertains. For a better understanding of the description of the invention, the following terms are explained explicitly.
In this document, “a” and “the” refer to both the singular and the plural, unless the context presupposes otherwise. For example, “a segment” means one or more segments.
The terms “comprise”, “comprising”, “consist of”, “consisting of”, “provided with”, “include”, “including”, “contain”, “containing” are synonyms and are inclusive or open terms that indicate the presence of what follows, and which do not exclude or prevent the presence of other components, characteristics, elements, members, steps, as known from or disclosed in the prior art.
Quoting numerical intervals by endpoints comprises all integers, fractions and/or real numbers between the endpoints, these endpoints included.
In the context of this document, a neural network refers to an artificial neural network, wherein the neural network includes inputs, nodes, called neurons, and outputs. An input is connected to one or more neurons. An output is also connected to one or more neurons. A neuron can be connected to one or more neurons. A neural network can comprise one or more layers of neurons between an input and an output. Each neuron and each connection of a neural network typically has a weight that is adjusted during a training phase using a training set of sample data.
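As a minimal illustration of the structure just described (purely a sketch, not the networks used by the unmanned autonomous vehicle; all weights below are placeholder values, not trained ones), a feed-forward pass through one hidden layer can be written as:

```python
import math

def sigmoid(x):
    # Common activation that squashes a weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron sums its weighted inputs and applies the activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output neuron does the same over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden neurons, one output (e.g. a "grass" probability).
hidden_weights = [[0.5, -0.4], [0.3, 0.8]]  # illustrative, untrained values
output_weights = [1.2, -0.7]
p = forward([0.9, 0.1], hidden_weights, output_weights)
assert 0.0 < p < 1.0
```

During training, the weights would be adjusted so that outputs match the annotated training set; that step is omitted here.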
In a first aspect, the invention relates to a method for controlling an unmanned autonomous vehicle based on a soil type on a terrain.
According to a preferred embodiment, the unmanned autonomous vehicle comprises a drive unit for moving the unmanned vehicle over the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain, a processor and memory, and a tool.
The drive unit preferably comprises at least one wheel and a motor for driving the wheel. Preferably, the motor is an electric motor. Preferably, the unmanned autonomous vehicle comprises a battery for powering the motor and other electrical systems. It will be apparent to one skilled in the art that the unmanned autonomous vehicle may comprise two, three, four or more wheels, wherein at least one wheel, preferably at least two wheels, are coupled to the motor for driving. It will be apparent to one skilled in the art that the at least one wheel can be part of a caterpillar track, the caterpillar track being drivable by the motor by means of the at least one wheel. The unmanned autonomous vehicle comprises a steering device for steering the unmanned autonomous vehicle. The steering device is a conventional steering device in which at least one wheel is rotatably arranged. Alternatively, the steering device is part of the drive unit, wherein two wheels on opposite sides of the unmanned autonomous vehicle can be driven differently by the motor. Differently means with a different speed and/or opposite direction of rotation. The steering device may or may not be part of the drive unit.
The camera is a digital camera. The camera is at least suitable for taking two-dimensional images. Optionally, the camera is suitable for taking three-dimensional images, with or without depth determination. The camera has a known viewing angle. The camera has a known position and alignment on the unmanned autonomous vehicle. The camera is positioned in such a way that at least part of the soil of the terrain is captured in the image. Because the viewing angle of the camera and the position and alignment of the camera on the unmanned autonomous vehicle are known, a position, relative to the position of the unmanned vehicle, of a soil type of the terrain that is visible on an image captured by the digital camera, is known. The camera has a fixed position and alignment on the unmanned autonomous vehicle. Alternatively, the camera is rotatably arranged, the camera being 360° rotatable in a horizontal plane and 180° rotatable in a vertical plane. The rotatable arrangement of the camera is preferably drivably coupled to motors with encoders. Motors with encoders are advantageous for knowing the position and alignment of a rotatably
mounted camera. Optionally, the camera is also suitable for capturing images with non-visible light, such as infrared light or ultraviolet light. This is advantageous because it allows images of the terrain to be captured with visible light, infrared light, and ultraviolet light, from which different information can be obtained, which can be advantageously combined for a successful classification of a soil type of the terrain visible in a captured image. It will be apparent to one skilled in the art that instead of a single camera, various cameras can also be combined, for example, wherein a first camera captures images using visible light, a second camera captures images using infrared light, and a third camera captures images using ultraviolet light. Preferably, the first camera, the second camera, and the third camera have an overlapping field of view. This is advantageous for combining information from images captured using the first camera, the second camera, and the third camera. It will be apparent to one skilled in the art that the unmanned autonomous vehicle can comprise several similar cameras.
The positioning means for determining the position of the unmanned vehicle may be any suitable means. The positioning means is, for example, a Global Navigation Satellite System (GNSS), such as GPS, GLONASS or Galileo. The positioning means is, for example, a system with wireless beacons on the terrain, whereby the unmanned autonomous vehicle determines a position on the terrain by triangulation. The positioning means is, for example, based on recognition of reference points in images of the terrain, for example images made with the aid of the camera of the unmanned autonomous vehicle. Knowing the viewing angle of the camera and the position and the alignment of the camera on the unmanned autonomous vehicle, it is possible by means of trigonometry and/or photogrammetry to automatically estimate a distance from a reference point in an image to the camera and the unmanned autonomous vehicle, a distance between two reference points in an image and/or a dimension of a reference point in an image, even if the camera is only suitable for taking two-dimensional images, so that the position of the unmanned autonomous vehicle on the terrain can be determined.
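A minimal sketch of such a trigonometric distance estimate, assuming a flat terrain, a camera at a known height, and a known angle below the horizon at which a reference point appears (derived from the camera pitch and the per-pixel angle given by the viewing angle; all names and values here are illustrative assumptions, not part of the original disclosure):

```python
import math

def ground_distance(cam_height_m, pitch_deg, pixel_angle_deg):
    # Total angle below the horizon at which the reference point appears:
    # the downward pitch of the optical axis plus the angular offset of the
    # point within the image (from the known viewing angle).
    angle = math.radians(pitch_deg + pixel_angle_deg)
    # Flat-ground assumption: distance = height / tan(angle).
    return cam_height_m / math.tan(angle)

# Camera 0.3 m above the ground, axis pitched 20° down, point 5° below axis:
d = ground_distance(0.3, 20.0, 5.0)
assert 0.5 < d < 1.0  # roughly 0.64 m ahead of the camera
```

With two such estimated distances to known reference points, the vehicle position can then be fixed by triangulation, as described above.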
The memory comprises a working memory and non-volatile memory for storing, for example, data, such as images captured by the camera, and, for example, a program executed by the processor, wherein the method is executed when the program is executed.
The tool is any tool suitable for performing a task on the terrain. Non-limiting examples of suitable tools are a lawnmower, a vacuum cleaner, a brush, a spray lance, pruning shears, etc.
The method comprises the steps of:
- Determining a position of the unmanned autonomous vehicle on the terrain using the positioning means. The terrain can be indoors, for example a factory hall or a function room, or outdoors, for example a garden, a soccer field, or a square.
- Capturing at least one image of the terrain using the unmanned autonomous vehicle's camera. The at least one image may or may not be stored in the memory of the unmanned autonomous vehicle. The at least one image may or may not be transmitted by the unmanned autonomous vehicle to an external processing unit. It will be apparent to one skilled in the art that if the unmanned autonomous vehicle moves to another position on the terrain, at least one image of the terrain is captured again using the camera of the unmanned autonomous vehicle. Preferably, during movements by the unmanned autonomous vehicle along a trajectory from a first position to a second position, recordings are made at regular intervals. The images are captured at an adjusted rate, expressed in frames per second, so that consecutive images overlap at least partially. It will be apparent to one skilled in the art that the speed of capturing images depends on the speed at which the unmanned autonomous vehicle moves on the terrain.
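The relation between driving speed and capture rate sketched above can be made concrete with a small calculation. This is an illustrative sketch only; the footprint length (how much ground a single image covers in the driving direction) is an assumed parameter, not a value from the disclosure:

```python
def min_frame_rate(speed_m_s, footprint_m, overlap_fraction):
    # For consecutive images to overlap by at least `overlap_fraction`,
    # the vehicle may advance at most (1 - overlap) of one footprint
    # between frames, giving a minimum frames-per-second rate.
    return speed_m_s / (footprint_m * (1.0 - overlap_fraction))

# 0.5 m/s driving speed, images covering 1.0 m of ground, 50% overlap:
fps = min_frame_rate(0.5, 1.0, 0.5)
assert fps == 1.0  # one frame per second suffices under these assumptions
```

Doubling the driving speed doubles the required rate, which is the dependence stated in the text.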
- Processing at least one image of the terrain by the processor. If the camera has captured multiple images, preferably the multiple images are processed. The unmanned autonomous vehicle comprises at least one global neural network and at least one local neural network.
A global neural network has been trained using a global training set. A global training set comprises at least 70% images of terrains other than the terrain where the unmanned autonomous vehicle is located. Preferably, a global training set comprises at least 80% images of other terrains, more preferably at least 90%, even more preferably at least 95%, and even more preferably at least 99.99%. A global training set comprises images of many terrains to obtain a global neural network that can be used as a standard for a multitude of different terrains. A global neural network is preferably trained on an external processing unit and loaded into the memory of the unmanned autonomous vehicle. It will be
apparent to one skilled in the art that if the unmanned autonomous vehicle comprises several global neural networks, the global neural networks were preferably trained with different global training sets. This does not mean that different global training sets cannot have images in common. Having at least one global neural network is advantageous because it allows the unmanned vehicle to be deployed immediately, without a user having to first create a training set of images of the entire terrain and train a neural network. This would require a lot of time for the user to capture a large number of images of the terrain and to annotate all of these images. This is also expensive for a user because a lot of memory storage is required to save the training set.
A local neural network has been trained using a local training set. A local training set comprises only images of the terrain where the unmanned autonomous vehicle is located. The images of a local training set were created using a separate digital camera device, a GSM device with a camera, a tablet with a camera, the camera of the unmanned autonomous vehicle, or another suitable camera, and added to the mentioned local training set. A local neural network is trained on an external processing unit and loaded into the memory of the unmanned autonomous vehicle. Alternatively, a local neural network is trained on the processor of the unmanned autonomous vehicle. In that case, a local training set is also available in the memory of the unmanned autonomous vehicle.
Based on the position of the unmanned autonomous vehicle determined in an earlier step of the method, one or more neural networks from a group formed by the at least one global neural network and the at least one local neural network are selected. In concrete terms, this means that one or more neural networks are selected from all global neural networks and all local neural networks comprised in the unmanned autonomous vehicle.
During processing of the at least one image, each of the selected neural networks determines at least one classification of at least one soil type in the at least one image. A classification can be binary, for example a soil type is grass (1) or is not grass (0), or can also be a value that represents a probability, for example a soil type with 83% probability of grass. In the context of this document, a positive classification refers to a classification with a probability greater than 60%, preferably greater than 75%, more preferably greater than 90%, and even more preferably greater than 98%. It will be apparent to one skilled in the art that in a binary system, a value of 1 is a positive classification and a value of 0 is not a
positive classification. A selected neural network can be trained for one or more classifications. Non-limiting examples of classifications are grass, gravel, stone floor, soil, flower bed, leaves, parquet, vegetable garden, etc. Selected neural networks can be trained for different classifications, equal classifications, or for only partially different or equal classifications. An image can comprise a single soil type. An image can comprise several soil types, which are separated from each other, which run through each other or a combination of both.
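The positive-classification convention described above can be sketched in a few lines (an illustrative sketch only; the function name and default threshold are assumptions mirroring the 60% figure in the text):

```python
def is_positive(classification, threshold=0.60):
    # Works for both probability outputs in [0, 1] and binary outputs (0/1):
    # a binary 1 exceeds any threshold below 1, a binary 0 never does.
    return classification > threshold

assert is_positive(0.83)            # 83% grass: positive at the 60% threshold
assert not is_positive(0.83, 0.90)  # but not at the stricter 90% threshold
assert is_positive(1)               # binary 1 is a positive classification
assert not is_positive(0)           # binary 0 is not
```

The stricter preferred thresholds (75%, 90%, 98%) simply correspond to larger values of the threshold parameter.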
The at least one local neural network is advantageous to obtain a better classification in specific cases where determining at least one classification of at least one soil type in an image by the global network results in incorrect classification, using the at least one local neural network. The at least one local neural network is additionally advantageous because only a lower number of images in the local training set is required to achieve a better classification in those specific cases, compared to retraining the at least one global neural network by adding images from the terrain to the global training set. The latter solution would require an enormous number of images to have any impact on the training of the at least one global neural network for those specific cases, since the global training set already contains a plethora of images from a large number of different terrains. It is particularly advantageous that one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network are selected based on the position of the unmanned autonomous vehicle. As a result, a user does not have to train a local neural network for the entire terrain, but only for those parts where a global neural network produces a misclassification, so that capturing the images for a local training set and training a local neural network requires only limited time and memory. Because several global neural networks and/or several local neural networks can be selected, it is possible to make an optimal selection according to the desired classifications in a part of the terrain. For example, a large part of a garden could be a lawn. A correct classification of grass is determined everywhere in the garden for images of the lawn, except around a tree.
There, the grass is much sparser due to shade and the tree absorbing a lot of water, and there is soil visible through the grass, which is why a global neural network does not classify it as grass. The unmanned autonomous vehicle would therefore, for example, not mow the grass at those positions. If the global neural network now needs to be retrained to also consider this part of the garden as grass, a huge number of images of the lawn around the tree must be added to the global training set, which is practically not feasible. Additionally, there is the real risk that this will cause a flower bed with some wild
grass at the edge of the lawn to be classified as grass, which would result in the unmanned autonomous vehicle also mowing the flower bed, which is not desired. By training a local neural network with a limited set of images and selecting this local neural network only around the tree, the sparse grass around the tree is properly mowed. If there is also a flower bed in the vicinity of the tree, for example, a second local neural network can be trained with images of the flower bed, and both local neural networks can be selected in the vicinity of the tree to successfully distinguish between the sparse grass and the flower bed around the tree.
- Creating at least one control signal by the processor for the drive unit and/or tool. The at least one control signal is created based on the classifications determined by the selected neural networks. The at least one control signal determines which action the drive unit and/or the tool perform. For example, the at least one control signal determines how the unmanned autonomous vehicle moves or which tool is used. This is advantageous because a user of an unmanned autonomous vehicle does not have to determine for every position on the terrain what action the unmanned autonomous vehicle will perform. It is sufficient to determine which action should be performed for a specific soil type. For example, if a stone floor is determined as a positive classification for the soil type, the at least one control signal determines that the unmanned autonomous vehicle continues to drive over the stone floor and sweep the stone floor. Other examples include mowing grass if grass is determined as a positive classification for the soil type, watering a flower bed if flowers are determined as a positive classification for the soil type, sweeping up leaves if fallen leaves are determined as a positive classification for the soil type, etc. If an image comprises multiple soil types that are separated from each other, the at least one control signal is preferably created based on a classification for a soil type immediately adjacent to the unmanned autonomous vehicle, preferably in a driving direction of the unmanned autonomous vehicle. If an image comprises multiple soil types, which run together, the at least one control signal is preferably created based on multiple classifications for the soil types immediately adjacent to the unmanned autonomous vehicle, preferably in a driving direction of the unmanned autonomous vehicle.
For example, a histogram of soil type classifications is created for an area immediately adjacent to the unmanned autonomous vehicle, wherein the at least one control signal is created based on a classification with a highest value in the histogram.
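The histogram approach just described can be sketched as follows (an illustrative sketch only; the per-patch labels and counts are assumed example data, not values from the disclosure):

```python
from collections import Counter

def dominant_soil_type(classifications):
    # Tally the soil-type classifications for the area immediately adjacent
    # to the vehicle (e.g. one label per pixel or per image patch) and
    # return the classification with the highest value in the histogram.
    histogram = Counter(classifications)
    label, _count = histogram.most_common(1)[0]
    return label

# 70% grass, 20% soil, 10% weeds in the area ahead of the vehicle:
patch = ["grass"] * 70 + ["soil"] * 20 + ["weeds"] * 10
assert dominant_soil_type(patch) == "grass"
```

The control signal (e.g. "mow") would then be derived from the dominant label; the drought example further below shows a limitation of this approach that rule sets with thresholds address.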
According to a preferred embodiment, the at least one control signal is created according to predetermined rules based on the classifications determined by the selected neural networks. At least two different sets of predetermined rules are defined. A set of predetermined rules is selected depending on the position of the unmanned autonomous vehicle.
This embodiment is advantageous for determining other actions by the unmanned autonomous vehicle, depending on the position of the unmanned autonomous vehicle on the terrain. For example, fallen leaves can be on a lawn, making it possible that on certain parts of the lawn no grass is visible anymore. The unmanned autonomous vehicle could cut the grass even further, at least partially shredding the fallen leaves. At the edge of the lawn, for example, a curbstone may be hidden under the fallen leaves. If the unmanned autonomous vehicle continues to mow the grass here, it is very likely that the tool of the unmanned vehicle for mowing the grass will be damaged by the curbstone. So here it is not desirable to cut the grass further. Based on the classifications determined by the selected neural networks, it is not possible to distinguish between both positions. For both positions in the terrain, it is possible that only fallen leaves were determined as positive classification for the soil type. By now selecting a first set of rules for the lawn, which says, for example, that both grass and fallen leaves may be mowed, and by selecting a second set of rules near the edge of the lawn, which says, for example, that only grass is allowed to be mowed, and not in case of fallen leaves, it can be avoided that a control signal is created that causes the unmanned autonomous vehicle to mow over the curbstone and damage the mowing tool. This embodiment is also advantageous if an image comprises several soil types that run together. The at least one control signal is created based on multiple positive classifications for the soil types and based on the selected set of rules. For example, a rule may specify that if grass, soil, and weeds have been determined as positive classifications for soil types, then mowing will take place at these positions.
This is particularly advantageous compared to a control signal created based on a classification with the highest value in a histogram, as in a previously described embodiment. For example, in a severe drought, it is possible that grass is much sparser than usual and there is more soil and weeds, so that, for example, soil is the highest value in the histogram, and a decision could be made not to mow, while the unmanned autonomous vehicle is actually on a lawn. An advanced possibility is that the at least one control signal is created based on multiple positive classifications for the soil types and based on the selected set of rules, wherein the selected set of rules comprises a threshold value for a surface area of at least one soil type with a positive classification. In the example above
where grass, soil and weeds were determined as positive classifications for soil types, the selected set of rules may comprise a minimum value for the surface area of the grass. This threshold value can be expressed as an absolute value, for example, an area in m2 or a number of pixels in a camera image, or as a relative value, for example, a percentage of a total area in m2 or in pixels in a camera image or a share in a histogram. In the above example, the selected set of rules could comprise a threshold value of, for example, at least 50% grass or at least 3 m2 of grass to be allowed to mow at that position. The selected set of rules could comprise a threshold value that, for example, allows mowing at that position only with a maximum of 30% soil or a maximum of 0.5 m2 of soil. It is also clear from this example that threshold values can be logically combined.
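A minimal sketch of such a logically combined rule with relative area thresholds, using the 50% grass and 30% soil figures from the example above (the rule encoding and the share values are illustrative assumptions):

```python
def may_mow(shares, min_grass=0.50, max_soil=0.30):
    # `shares` maps each positively classified soil type to its relative
    # surface area (fraction of the assessed area). The thresholds are
    # combined logically: enough grass AND not too much soil.
    return (shares.get("grass", 0.0) >= min_grass
            and shares.get("soil", 0.0) <= max_soil)

assert may_mow({"grass": 0.60, "soil": 0.25, "weeds": 0.15})
assert not may_mow({"grass": 0.40, "soil": 0.35, "weeds": 0.25})
```

An absolute-area variant would compare against m2 values instead of fractions; the structure of the rule is the same.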
According to a further embodiment, a specific selection of a set of predetermined rules is made for each control signal based on the position of the unmanned autonomous vehicle.
This embodiment is advantageous because it allows a new set of predetermined rules to be selected at a certain position for a first control signal, while for a second control signal, a previously selected set of rules can be retained or, for example, another set of predetermined rules can be selected. For example, a new set of predetermined rules could be selected for a control signal for the drive unit at a position where the terrain is often wet, where the speed of the unmanned autonomous vehicle is reduced if soil has been determined as positive classification for the soil type, to prevent wheels of the drive unit from slipping and damaging the terrain, while the control signal for a tool for picking up fallen leaves still uses the same set of predetermined rules.
According to an embodiment, the one or more neural networks from the group formed by the at least one global neural network and the at least one local neural network, in addition to the position of the unmanned autonomous vehicle, are also selected on the basis of time. For example, time can be a time of a day, a day of a week, a month, or a season. This embodiment is advantageous because a terrain can have a different appearance during a day, week, month, or season, making it difficult to correctly classify soil types. For example, the area may be in the shade at dusk, certain types of flowers may only be present in the spring, etc. By selecting other neural networks, the chosen neural networks can be optimized for a time. This embodiment is applicable to both global neural networks and local neural networks.
According to an embodiment, the set of predetermined rules is also selected based on time, in addition to the position of the unmanned autonomous vehicle. For example, time can be a time of a day, a day of a week, a month, or a season. This embodiment is advantageous because during a day, week, month or season, other tasks on a terrain are necessary. For example, certain plants are preferably sprayed in the morning or evening. For example, drought in seasons outside of summer is less of a problem than in summer. In the example described above of mowing grass, a different set of rules would be selected outside of summer so that if grass, soil and weeds were all determined as positive classifications for soil types, these positions would not be mowed, because it is then unlikely that the unmanned autonomous vehicle is still on the lawn.
According to a preferred embodiment, a specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network is made for each control signal based on the position of the unmanned autonomous vehicle.
This embodiment is advantageous because as a result a new selection of neural networks can be made at a certain position for a first control signal, while for a second control signal a selection of neural networks that has already been made can be retained or, for example, another selection of neural networks can be made.
For example, for a control signal for a mowing tool at a position of the terrain where there are flowerbeds in the grass, a new selection of neural networks could be made, for example, where a local neural network is added to the selection of neural networks in order to classify the flowerbeds as soil type, to prevent the mowing tool from mowing the flowerbeds, while the drive control signal still uses the same selection of neural networks.
According to a preferred embodiment, a first group of zones are defined on a digital map of the terrain. Preferably, the digital map is presented visually in a graphical application, wherein the first group of zones are drawn graphically on the digital map. The graphical application is preferably suitable for use on a smartphone and/or a tablet and/or a computer. Preferably, the graphical application is suitable for use in a web browser. A specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network is associated with each zone of the first group of zones. Preferably, the association also takes place in the aforementioned graphical application. The digital
map of the terrain comprises the first group of zones and the associated selections of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network. The digital map is loaded into the unmanned autonomous vehicle. The digital map is loaded into the unmanned vehicle using a data cable. A non-limiting example is a USB cable. Alternatively, the digital map is loaded into the unmanned vehicle over a wireless connection. Non-limiting examples are a Bluetooth connection or a WiFi connection. Based on the position of the unmanned autonomous vehicle on the terrain, the unmanned autonomous vehicle determines the zone from the first group of zones in which the unmanned autonomous vehicle is located. The unmanned autonomous vehicle uses the digital map for this. The unmanned autonomous vehicle uses the therewith associated selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network while processing the at least one image of the terrain.
This embodiment is advantageous because a user can simply define on a digital map a first group of zones on the terrain where, for example, using a standard global neural network is not sufficient for a correct classification and where a different selection of neural networks is required. Due to the use of a digital map, this does not require the intervention of, for example, a technician. The user only has to associate an appropriate selection of neural networks with a zone from the first group of zones. This can be done easily in a graphical application, for example.
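The zone lookup on the digital map can be sketched as follows. This is an illustrative sketch only: the zone shapes (axis-aligned rectangles), the map encoding, and the network identifiers are all assumptions made for the example, not part of the disclosure:

```python
# A digital map as a list of zones, each associating an area of the terrain
# with a selection of neural network identifiers. A later, more specific
# zone (e.g. around a tree) adds a local network to the selection.
zones = [
    {"rect": (0, 0, 50, 50), "networks": ["global_lawn"]},
    {"rect": (10, 10, 20, 20), "networks": ["global_lawn", "local_tree"]},
]

def networks_for_position(x, y, zone_list):
    # Return the selection of the last matching zone, so that a more
    # specifically drawn zone overrides the terrain-wide default.
    selected = None
    for zone in zone_list:
        x0, y0, x1, y1 = zone["rect"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            selected = zone["networks"]
    return selected

assert networks_for_position(15, 15, zones) == ["global_lawn", "local_tree"]
assert networks_for_position(40, 40, zones) == ["global_lawn"]
```

In practice zones would be drawn as arbitrary polygons in the graphical application; the lookup principle is the same.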
According to a preferred embodiment, a second group of zones is defined on a digital map of the terrain. The digital map may or may not be equal to the digital map in a previously described embodiment on which a first group of zones is defined. The digital map can preferably be represented in the same way as in the previously described embodiment, and the second group of zones can preferably be defined in the same way. Each zone of the second group of zones is associated with a specific selection of a set of predetermined rules. Preferably, the association also takes place in a graphical application, as described in the aforementioned embodiment. The digital map comprises the zones from the second group of zones and the associated selections of sets of predetermined rules. The digital map is loaded into the unmanned autonomous vehicle. Preferably, the digital map is loaded into the unmanned autonomous vehicle in the same manner as described in the aforementioned embodiment. The unmanned autonomous vehicle determines the zone from the second group of zones where the unmanned autonomous vehicle is located based on the position of the unmanned autonomous vehicle on the terrain. The
unmanned autonomous vehicle uses the therewith associated set of predetermined rules while creating the at least one control signal.
This embodiment is advantageous because a user can easily define a second group of zones on the terrain on a digital map where, for example, a different set of predetermined rules is required, for example because a different action by the unmanned autonomous vehicle is desired in case of equal positive classifications of a soil type. Due to the use of a digital map, this does not require the intervention of, for example, a technician. The user only needs to associate a suitable set of predetermined rules with a zone from the second group of zones. This can be done easily in a graphical application, for example.
According to a further embodiment, the first group of zones is equal to the second group of zones. This is advantageous because it makes it clear to a user which selection of neural networks and which set of predetermined rules will be used by the unmanned autonomous vehicle at a position in the terrain. The zones only need to be defined once.
According to a preferred embodiment, zones are defined hierarchically. Associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically higher zone are automatically associated with a hierarchically lower zone. Associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically lower zone are not associated with a hierarchically higher zone. Preferably, an associated selection of neural networks and/or associated set of predetermined rules of a lower hierarchical zone for a specific control signal takes precedence over an associated selection of neural networks and/or associated set of predetermined rules of a higher hierarchical zone.
This embodiment is advantageous for easily defining smaller zones where at least one control signal has to be created in a different way, for example because a different or an additional action has to be performed by the unmanned autonomous vehicle or, for example, because a different selection of neural networks is required to obtain correct classifications for a certain action, so that a correct control signal is effectively created for the said action. An action can, for example, be mowing the grass, but it can also be stopping the mowing of grass. For other control signals, the associated selection of neural networks and/or the associated set of predetermined rules of a hierarchically higher zone can still be used. As a result, it is not necessary to divide the entire terrain into many smaller zones and to associate a selection of neural networks and/or a set of predetermined rules with all zones for all possible control signals.
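The hierarchical precedence rule described above — a lower zone's association wins for the control signals it defines, while other control signals are inherited from the higher zone — can be sketched as follows. This is an illustrative Python sketch; the zone and rule-set names follow the garden example of the figures:

```python
def resolve_rule_set(zone_stack, control_signal):
    """zone_stack lists the zones containing the vehicle, ordered from the
    hierarchically highest to the lowest; each zone maps control signals
    to rule-set identifiers. The lowest zone defining the signal takes
    precedence; a signal not defined in a lower zone is inherited from a
    higher zone."""
    for zone in reversed(zone_stack):
        if control_signal in zone:
            return zone[control_signal]
    return None  # no zone defines this control signal

zone_1 = {"mow": "C1"}       # highest hierarchical layer
zone_5 = {"charge": "C5"}    # lower layer, nested inside zone 1
stack = [zone_1, zone_5]
print(resolve_rule_set(stack, "charge"))  # C5 (defined in the lower zone)
print(resolve_rule_set(stack, "mow"))     # C1 (inherited from the higher zone)
```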
According to a preferred embodiment, images for a local training set are captured using the camera of the unmanned autonomous vehicle. This embodiment is advantageous because no additional camera is needed to capture images for the local training set. This embodiment is additionally advantageous because images for the local training set are captured in the same orientation relative to the unmanned autonomous vehicle as when using the unmanned autonomous vehicle for performing tasks. This embodiment is particularly advantageous because the images for the local training set can be captured during the use of the unmanned autonomous vehicle for performing tasks or during an autonomous exploration of at least a part of the terrain.
According to a further embodiment, the images of the local training set are processed for training a local neural network using the processor of the unmanned autonomous vehicle. The images of the local training set are stored in the memory of the unmanned autonomous vehicle for this purpose. The images are labeled using an external device, such as a smartphone, a tablet or a computer, or on a screen comprised in the unmanned autonomous vehicle. In case the images are labeled on an external device, the images are first sent from the unmanned vehicle over a wired or wireless connection to the external device, after which the labels are sent back to the unmanned autonomous vehicle. Non-limiting examples of suitable wired or wireless connections are provided in previously described embodiments. The labeled images are processed by the processor to train the local neural network.
This embodiment is advantageous because no capacity on external processing units, such as a server in a cloud environment, needs to be reserved to train a local neural network.
According to a preferred embodiment, the images of the local training set, captured using the camera of the unmanned autonomous vehicle, are forwarded to an external processing unit and added to one or more global training sets. The images can be labeled before or after. Preferably, the images are labeled afterwards, preferably by a technician. Optionally, a selection of images from the local training set is added to one or more global training sets. The selection of images from the local training set does not have to be the same for each global training set. After adding the images of the local training set to one or more global training sets, the one or more global training sets are processed by the external processing unit for training one or more global neural networks. This embodiment is advantageous for incrementally obtaining large global training sets and for incrementally improving global neural networks. This is particularly advantageous because local training sets are often created in cases where global neural networks do not classify soil types accurately enough. By incorporating images from local training sets into global training sets, future users of other unmanned autonomous vehicles can avoid similar problems.
According to a preferred embodiment, the unmanned autonomous vehicle determines its position using a digital map of the terrain and based on images of the terrain, which are captured using the camera of the unmanned autonomous vehicle. Reference points in the images are compared with reference points on the digital map. This embodiment is advantageous because an unmanned autonomous vehicle only needs a camera for both capturing images for classification of soil types and determining the position of the unmanned autonomous vehicle on the terrain.
According to a preferred embodiment, at least one of the selected neural networks determines at least one classification of at least one object in the at least one image. This can be both a global neural network and/or a local neural network. The at least one control signal is created based on the classifications for soil types and objects determined by the selected neural networks. Previously described embodiments are also applicable, mutatis mutandis, for training a local neural network or a global neural network for classifying an object. This embodiment is advantageous if a task to be performed by the unmanned autonomous vehicle depends not only on a soil type, but also on the presence or absence of an object, for example the presence of a charging station for charging a battery of the unmanned autonomous vehicle or availability of a waste container for dumping household or garden waste, for example.
According to a preferred embodiment, a weight is assigned to each of the classifications determined by the selected neural networks. The weight depends on the position of the unmanned autonomous vehicle. Weights are advantageous, for example, for use as threshold values in predetermined rules. Weights are advantageous, for example, if an image comprises multiple soil types which run together, to give more or less weight to a certain positive classification. Weights are particularly advantageous for use at zone edges. Weights that gradually change as they approach the edge of a zone can provide a smooth transition from a first zone to a second zone, for example by gradually transitioning weights corresponding to a first predetermined rule in a first zone to weights corresponding to a second predetermined rule in a second zone, or for example by gradually transitioning weights given to certain positive classifications in a first zone to weights given to the same positive classifications in a second zone.
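A smooth weight transition at a zone edge, as described above, can for instance be realized by linear interpolation. The following is an illustrative Python sketch; the weight values and the signed-distance convention are hypothetical:

```python
def blend_weights(w_first, w_second, distance_to_edge, blend_width):
    """Linearly cross-fade classification weights near a zone edge.
    distance_to_edge is signed: positive inside the first zone, negative
    inside the second; within +/- blend_width of the edge, the weights
    of the two zones are interpolated for a smooth transition."""
    t = (distance_to_edge + blend_width) / (2 * blend_width)
    t = min(max(t, 0.0), 1.0)  # clamp the blend factor to [0, 1]
    return {k: t * w_first[k] + (1 - t) * w_second[k] for k in w_first}

w_zone1 = {"grass": 1.0, "clover": 0.2}  # hypothetical weights in a first zone
w_zone2 = {"grass": 0.5, "clover": 1.0}  # hypothetical weights in a second zone
mid = blend_weights(w_zone1, w_zone2, 0.0, 2.0)  # exactly on the edge
print(mid["grass"])  # 0.75, halfway between the two zone weights
```

Far from the edge the blend factor saturates, so each zone's own weights apply unchanged there.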
According to a preferred embodiment, it is determined whether a soil type belongs to a set of predetermined soil types based on the classifications determined by the selected neural networks. The unmanned autonomous vehicle only moves across soil types from the set of predetermined soil types.
This embodiment is advantageous because it allows the unmanned autonomous vehicle to move autonomously over a terrain, wherein the unmanned autonomous vehicle remains within a perimeter on the terrain. The perimeter is determined by a transition between soil types that belong to the set of predetermined soil types and soil types that do not belong to the set of predetermined soil types. The unmanned autonomous vehicle cannot cross this transition. For example, grass belongs to the set of predetermined soil types, while flower beds, soil, and terrace are not part of the set of predetermined soil types. As a result, the unmanned autonomous vehicle remains on a lawn that is bordered by flower beds and a terrace. It is therefore not necessary to create physical boundaries around the lawn or to place or bury a signal wire around part of the terrain.
According to a further embodiment, the set of predetermined soil types depends on the position of the unmanned autonomous vehicle on the terrain. This embodiment is particularly advantageous for creating corridors for the unmanned vehicle between different zones on the terrain. For example, there are two tiled terraces on the terrain that are separated from each other by a gravel path. The unmanned autonomous vehicle comprises, for example, a brush as a tool for brushing the tiled terraces. The predetermined set of soil types consists of tiled terraces, so the unmanned vehicle will only move across the tiled terraces. This does mean that the unmanned autonomous vehicle will only brush one tiled terrace, because the unmanned vehicle cannot cross the gravel path to move to the other tiled terrace. Depending on the position of the unmanned autonomous vehicle on the terrain, a gravel path can be added as a soil type to the set of predetermined soil types at the level of the desired corridor, allowing the unmanned autonomous vehicle to move across the gravel path to the other tiled terrace at the level of the corridor. At positions outside the corridor, a gravel path has not been added to the set of predetermined soil types, causing the unmanned autonomous vehicle not to enter the gravel path there. It is clear that in this example, based on the position of the unmanned autonomous vehicle, it can be ensured that a control signal is created for the brush so that the unmanned autonomous vehicle does not brush the gravel path.
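The position-dependent set of predetermined soil types, including the corridor from the terrace example, can be sketched as follows. This is an illustrative Python sketch; the corridor geometry and the soil type names are hypothetical:

```python
def may_enter(soil_type, position, base_allowed, corridors):
    """Decide whether the vehicle may move onto a soil type at a position.
    base_allowed is the base set of predetermined soil types; corridors
    is a list of (in_region, extra_types) pairs that extend the set
    locally, as with the gravel path between the two tiled terraces."""
    if soil_type in base_allowed:
        return True
    return any(in_region(position) and soil_type in extra
               for in_region, extra in corridors)

base = {"tiled terrace"}
# hypothetical corridor across the gravel path: x between 4 and 6
corridors = [(lambda p: 4 <= p[0] <= 6, {"gravel"})]
print(may_enter("gravel", (5, 1), base, corridors))  # True, inside the corridor
print(may_enter("gravel", (9, 1), base, corridors))  # False, outside the corridor
```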
In a second aspect, the invention relates to an unmanned autonomous vehicle for performing tasks on a terrain.
According to a preferred embodiment, the unmanned autonomous vehicle comprises a drive unit for moving the unmanned vehicle over the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain and a tool.
The drive unit preferably comprises at least one wheel and a motor for driving the wheel. Preferably, the motor is an electric motor. Preferably, the unmanned autonomous vehicle comprises a battery for powering the motor and other electrical systems. It will be apparent to one skilled in the art that the unmanned autonomous vehicle can comprise two, three, four, or more wheels, wherein at least one wheel, preferably at least two wheels, are coupled to the motor for driving. It will be apparent to one skilled in the art that the at least one wheel can be part of a caterpillar track, the caterpillar track being drivable by the motor by means of the at least one wheel. The unmanned autonomous vehicle comprises a steering device for steering the unmanned autonomous vehicle. The steering device is a conventional steering device in which at least one wheel is rotatably arranged. Alternatively, the steering device is part of the drive unit, wherein two wheels on opposite sides of the unmanned autonomous vehicle can be driven differently by the motor. Differently means with a different speed and/or opposite direction of rotation. The steering device may or may not be part of the drive unit.
The camera is a digital camera. The camera is at least suitable for taking two-dimensional images. Optionally, the camera is suitable for taking three-dimensional images, with or without depth determination. The camera has a known viewing angle. The camera has a known position and alignment on the unmanned autonomous vehicle. The camera is positioned in such a way that at least part of the soil of the terrain is captured in the image. Because the viewing angle of the camera and the position and alignment of the camera on the unmanned autonomous vehicle are known, a position, relative to the position of the unmanned vehicle, of a soil type of the terrain that is visible on an image captured by the digital camera, is known. The camera has a fixed position and alignment on the unmanned autonomous vehicle.
Alternatively, the camera is rotatably arranged, the camera being 360° rotatable in a horizontal plane and 180° rotatable in a vertical plane. The rotatable arrangement of the camera is preferably drivably coupled to motors with encoders. Motors with encoders are advantageous for knowing the position and alignment of a rotatably mounted camera. Optionally, the camera is also suitable for capturing images with non-visible light, such as infrared light or ultraviolet light. This is advantageous because it allows images of the terrain to be captured with visible light, infrared light, and ultraviolet light, from which different information can be obtained. It will be apparent to one skilled in the art that instead of a single camera, various cameras can also be combined, for example, wherein a first camera captures images using visible light, a second camera captures images using infrared light, and a third camera captures images using ultraviolet light. Preferably, the first camera, the second camera, and the third camera have an overlapping field of view. This is advantageous for combining information from images captured using the first camera, the second camera, and the third camera. It will be apparent to one skilled in the art that the unmanned autonomous vehicle can comprise several similar cameras.
The positioning means for determining the position of the unmanned vehicle may be any suitable means. The positioning means is, for example, a Global Navigation Satellite System (GNSS), such as GPS, GLONASS or Galileo. The positioning means is, for example, a system with wireless beacons on the terrain, whereby the unmanned autonomous vehicle determines a position on the terrain by triangulation. The positioning means is, for example, based on recognition of reference points in images of the terrain, for example images made with the aid of the camera of the unmanned autonomous vehicle. Knowing the viewing angle of the camera and the position and the alignment of the camera on the unmanned autonomous vehicle, it is possible by means of trigonometry and/or photogrammetry to automatically estimate a distance from a reference point in an image to the camera and the unmanned autonomous vehicle, a distance between two reference points in an image and/or a dimension of a reference point in an image, even if the camera is only suitable for taking two-dimensional images, so that the position of the unmanned autonomous vehicle on the terrain can be determined.
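The trigonometric distance estimate mentioned above can be illustrated for the simplest case of a camera looking at flat ground. This Python sketch assumes a pinhole camera with known mounting height and downward tilt; the numbers are hypothetical:

```python
import math

def ground_distance(camera_height, camera_tilt_deg, pixel_angle_deg):
    """Estimate the horizontal distance from the camera to a ground point
    seen pixel_angle_deg below the optical axis, for a camera mounted
    camera_height metres above flat ground and tilted camera_tilt_deg
    below the horizontal. A simplified trigonometric sketch."""
    depression = math.radians(camera_tilt_deg + pixel_angle_deg)
    return camera_height / math.tan(depression)

# hypothetical mounting: camera 0.5 m high, tilted 20 degrees down; a point
# on the optical axis then lies roughly 1.37 m ahead of the camera
print(round(ground_distance(0.5, 20.0, 0.0), 2))  # 1.37
```

A full photogrammetric solution would also account for lens distortion and non-flat terrain, which this sketch deliberately ignores.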
The tool is any tool suitable for performing a task on the terrain. Non-limiting examples of suitable tools are a lawnmower, a vacuum cleaner, a brush, a spray lance, pruning shears, etc.
The unmanned autonomous vehicle comprises a memory and a processor. The processor is configured to perform a method according to the first aspect. The memory comprises a working memory and a non-volatile memory.
Such an unmanned autonomous vehicle is advantageous because it can be immediately deployed by a user of the unmanned autonomous vehicle on a terrain for performing tasks without additional training of a neural network, while a local neural network can be trained with limited user effort, so that incorrect classifications in a part of the terrain can be easily corrected.
According to a preferred embodiment, the camera of the unmanned vehicle is only suitable for taking two-dimensional images. Preferably, only one camera for two-dimensional images is mounted on the unmanned autonomous vehicle.
This embodiment is particularly advantageous because it results in a very simple unmanned autonomous vehicle for performing tasks on a terrain.
One skilled in the art will appreciate that a method according to the first aspect is preferably performed with an unmanned autonomous vehicle according to the second aspect and that an unmanned autonomous vehicle according to the second aspect is preferably configured for performing a method according to the first aspect. Each feature described in this document, both above and below, can therefore relate to any of the three aspects of the present invention.
In a third aspect, the invention relates to a use of a method according to the first aspect and/or an unmanned autonomous vehicle according to the second aspect for autonomously maintaining a garden.
This use results in an advantageous autonomous maintenance of a garden using an unmanned autonomous vehicle because the unmanned autonomous vehicle can be immediately used by a user for garden maintenance, without the user having to train a neural network of the unmanned autonomous vehicle and because the user can train a local network with very limited effort, if the unmanned autonomous vehicle performs an unwanted action or fails to perform a desired action due to incorrect classifications in a part of the garden, to correct the incorrect classifications.
In what follows, the invention is described by way of non-limiting figures illustrating the invention, and which are not intended to and should not be interpreted as limiting the scope of the invention.
DESCRIPTION OF THE FIGURES
Figure 1 shows a schematic representation of a terrain, indicating different zones.
The terrain comprises seven zones. The zones are arranged hierarchically in two layers. A highest hierarchical layer comprises a first zone (1), which is shaded in Figure 1, and within zone (1) three adjacent zones, namely a second zone (2), a third zone (3) and a fourth zone (4). A lowest hierarchical layer comprises three zones, namely a fifth zone (5) located within the second zone (2), a sixth zone (6) located within the third zone (3) and a seventh zone (7) located within the fourth zone (4).
In this example, the terrain is a garden, wherein the first zone (1) is for example a border with bushes, the second zone (2) is a lawn, the third zone (3) is a lawn with a concrete path and the fourth zone (4) is again a lawn. A charging station is installed in zone (5). A compost bin has been placed in zone (6). The zone (7) is a sandbox.
Figure 2 shows a schematic representation of global neural networks and local neural networks contained in an unmanned autonomous vehicle, according to an embodiment of the present invention.
The unmanned autonomous vehicle in this example has two global neural networks (A) and (B) and two local neural networks (C) and (D).
The global neural network (A) is trained to determine six classifications (a), (b), (c), (d), (e) and (f). In this example, classification (a) is grass, classification (b) clover, classification (c) soil, classification (d) dandelion, classification (e) bark, and classification (f) a charging station.
The global neural network (B) is trained to determine five classifications (f), (g), (h), (i) and (j). The classification (f) is again a charging station, but the global neural network (B) is trained with a different global training set in this example, which means that the global neural networks (A) and (B) can obtain different results for classification (f). Classification (g) is a person, classification (h) is a tree, classification (i) is a car, and classification (j) is a plant. The global neural network (B) in this example is specifically trained to determine object classifications rather than soil type classifications.
The local neural network (C) is trained to determine two specific classifications for the specific terrain from this example. Classification (k) is a compost bin and classification (l) is a bucket. The local neural network (D) is trained to determine one specific classification for the specific terrain in this example. Classification (m) is sand.
Figure 3 shows a schematic representation of a selection of neural networks and a set of rules according to a position of an unmanned autonomous vehicle, according to an embodiment of the present invention, in a terrain.
The terrain is the terrain from the example in Figure 1. The neural networks are the global neural networks (A) and (B) and the local neural networks (C) and (D) from Figure 2.
In this example, at the highest hierarchical level of zones, a single action (A1) is defined, namely mowing grass. The global neural network (A) is always associated with all zones of the highest hierarchical level (1), (2), (3) and (4). Zone (1) is associated with a set of predetermined rules (C1) that stipulate that the grass is only mowed if there are positive classifications for grass (a), for dandelions (d) and for bark (e). This means that the unmanned autonomous vehicle is located in the border between the bushes, where there is tree bark and where wild grass and dandelions grow, which may be mowed. The lawn in zone (2) has been turned over and seeded with clover just before redesigning. Zone (2) is associated with a set of predetermined rules (C2) that stipulate that the grass is only mowed if there are positive classifications for clover (b) and for soil (c). This means that the unmanned autonomous vehicle is on the turned over lawn, where any grass and clover may be mowed. Optionally, the set of predetermined rules (C2) may comprise a threshold value for soil (c), whereby at a position in zone (2) mowing may only take place if an area of the soil (c) is lower than the threshold value. This threshold value can be defined as a relative or an absolute value. In zone (3) there is a concrete path. Zone (3) is associated with a set of predetermined rules (C3) that stipulate that the grass is only mowed if there is a positive classification for grass (a). This means that the unmanned autonomous vehicle is definitely on grass and not on the concrete path, which prevents the unmanned vehicle from being damaged by mowing on the concrete path. Zone (4) is associated with a set of predetermined rules (C4) that determine that the grass is only mowed if there are positive classifications for grass (a) and dandelions (d). The intention is that the grass in zone (4) is a bit longer and wilder. Mowing the grass only where dandelions are growing allows the grass to grow until dandelions could grow.
In this example, three actions (A2), (A3) and (A4) are defined at the lowest hierarchical level of zones, namely charging a battery of the unmanned autonomous vehicle (A2), dumping garden waste into a compost bin (A3) and sweeping a sandbox (A4). Zone (5) is associated with both the global neural network (A) and the global neural network (B). Zone (5) is associated with a set of predetermined rules (C5) that determine that the battery of the unmanned autonomous vehicle may be charged if there is a positive classification for a charging station (f), both by the global neural network (A) and the global neural network (B). Because the global neural network (A) and the global neural network (B) are trained with a different global training set, a reliable classification (f) is determined in this way. Zone (6) is associated with both the global neural network (A) and the local neural network (C). Zone (6) is associated with a set of predetermined rules (C6), which determine that garden waste may be dumped into the compost bin if there are positive classifications for grass (a), soil (c) and dandelion (d) by the global neural network (A) and a positive classification for a compost bin (k) and not for a bucket (l) by the local neural network (C). The compost bin is placed in zone (6) in the grass next to the concrete path. Due to the placement of the compost bin, the grass around the compost bin is mowed less often, resulting in dandelions and the grass around the compost bin disappearing in some places. By using the local neural network (C), it is possible to obtain a positive classification (k) for the specific compost bin in the garden of the example, even if the compost bin is moved and the compost bin is not confused with a specific bucket that is also often used in the garden. Only the local neural network (D) is associated with zone (7).
Zone (7) is associated with a set of predetermined rules (C7) which stipulate that the sandbox may be swept if there is a positive classification for sand (m).
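The sets of predetermined rules in this example can be read as required and forbidden positive classifications. The following is an illustrative Python sketch, using the classification letters of Figure 2; the rule representation itself is an assumption, not part of the patent text:

```python
def rule_satisfied(required, forbidden, positives):
    """A set of predetermined rules of this form holds when all required
    classifications are positive and no forbidden classification is
    positive (cf. rule set (C6), which requires a compost bin (k) and
    rejects a bucket (l))."""
    return required <= positives and not (forbidden & positives)

# rule set (C3): mow only on a positive classification for grass (a)
print(rule_satisfied({"a"}, set(), {"a", "d"}))  # True
# (C6)-style check: compost bin (k) required, bucket (l) must be absent
print(rule_satisfied({"k"}, {"l"}, {"k", "l"}))  # False
```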
Claims
1. Method for controlling an unmanned autonomous vehicle based on a soil type on a terrain, wherein the unmanned autonomous vehicle comprises a drive unit for moving the unmanned vehicle across the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain, a processor and memory, and a tool, comprising the steps of:
- determining a position of the unmanned autonomous vehicle on the terrain using the positioning means;
- capturing at least one image of the terrain using the unmanned autonomous vehicle's camera;
- processing at least one image of the terrain by the processor;
- creating at least one control signal by the processor for the drive unit and/or tool; characterized in, that the unmanned vehicle comprises at least one global neural network and one local neural network, wherein a global neural network is trained using a global training set and a local neural network is trained using a local training set, wherein a local training set only comprises images of the terrain and wherein a global training set comprises at least 70% images from other terrains, wherein one or more neural networks from a group formed by the at least one global neural network and the at least one local neural network are selected based on the position of the unmanned autonomous vehicle, wherein during the processing of the at least one image by the processor, each of the selected neural networks determines at least one classification of at least one soil type in the at least one image, and wherein the at least one control signal is created based on the classifications determined by the selected neural networks.
2. Method according to claim 1, characterized in, that the at least one control signal is created according to predetermined rules based on the classifications determined by the selected neural networks, wherein at least two different sets of predetermined rules are defined, and a set of predetermined rules being selected depending on the position of the unmanned autonomous vehicle.
3. Method according to claim 2, characterized in, that for each control signal, a specific selection of a set of predetermined rules is selected based on the position of the unmanned autonomous vehicle.

4. Method according to any of the preceding claims 1-3, characterized in, that a specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network is made for each control signal based on the position of the unmanned autonomous vehicle.

5. Method according to any of the preceding claims 1-4, characterized in, that a first group of zones is defined on a digital map of the terrain, wherein each zone from the first group of zones is associated with a specific selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network, wherein the digital map of the terrain, comprising the zones from the first group of zones and the associated selections of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network, is loaded into the unmanned autonomous vehicle, and wherein the unmanned autonomous vehicle determines, based on the position, the zone from the first group of zones in which the unmanned autonomous vehicle is located and uses the therewith associated selection of one or more neural networks from the group formed by the at least one global neural network and/or the at least one local neural network while processing the at least one image of the terrain.
6. Method according to any of the preceding claims 2-5, characterized in, that a second group of zones is defined on a digital map of the terrain, wherein a specific selection of a set of predetermined rules is associated with each zone from the second group of zones, wherein the digital map of the terrain, comprising the zones from the second group of zones and the associated selections of sets of predetermined rules, is loaded into the unmanned autonomous vehicle, and wherein the unmanned autonomous vehicle determines, based on the position, the zone from the second group of zones in which the unmanned autonomous vehicle is located and uses the therewith associated set of predetermined rules while creating the at least one control signal.
7. Method according to any of the preceding claims 5-6, characterized in, that the first group of zones is equal to the second group of zones.

8. Method according to any of the preceding claims 5-7, characterized in, that zones are defined hierarchically, wherein associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically higher zone are automatically associated with a hierarchically lower zone and wherein associated selections of neural networks and/or associated sets of predetermined rules from a hierarchically lower zone are not associated with a hierarchically higher zone.

9. Method according to any of the preceding claims 1-8, characterized in, that images for a local training set are recorded using the camera of the unmanned autonomous vehicle.

10. Method according to claim 9, characterized in, that the images of the local training set for training a local neural network are processed with the aid of the processor of the unmanned autonomous vehicle.

11. Method according to any of the preceding claims 9-10, characterized in, that the images of the local training set are forwarded to an external processing unit and added to one or more global training sets, after which the one or more global training sets are processed by the external processing unit for training one or more global neural networks.

12. Method according to any of the preceding claims 1-11, characterized in, that the unmanned autonomous vehicle determines its position using a digital map of the terrain and based on images of the terrain, which are captured using the camera of the unmanned autonomous vehicle, wherein reference points in the images are compared with reference points on the digital map.
13. Method according to any of the preceding claims 1-12, characterized in, that at least one of the selected neural networks determines at least one classification of at least one object in the at least one image, wherein the at least one control signal is created based on the classifications for soil types and objects determined by the selected neural networks.
14. Method according to any of the preceding claims 1-13, characterized in, that a weight is assigned to each of the classifications determined by the selected neural networks, where the weight depends on the position of the unmanned autonomous vehicle.
15. Method according to any of the preceding claims 1-14, characterized in, that based on the classifications determined by the selected neural networks, it is determined whether a soil type belongs to a set of predetermined soil types, wherein the unmanned autonomous vehicle only moves over soil types from the set of predetermined soil types.
16. Unmanned autonomous vehicle for performing tasks on a terrain comprising a drive unit for moving the unmanned vehicle across the terrain, a camera for capturing images of the terrain, a positioning means for determining a position of the unmanned vehicle on the terrain, a processor and memory, and a tool, characterized in, that the processor is configured to perform a method according to any of claims 1-15.
17. Unmanned autonomous vehicle according to claim 16, characterized in, that the camera of the unmanned autonomous vehicle is only suitable for taking two-dimensional images.
18. Use of a method according to any of claims 1-15 and/or an unmanned autonomous vehicle according to any of claims 16-17 for autonomously maintaining a garden.
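The hierarchical-zone inheritance claimed above (zones pass their neural-network selections and rule sets down to lower zones, never up) can be sketched as follows. This is an illustrative sketch only, not part of the patent; the class and method names (`Zone`, `effective_networks`) are hypothetical.

```python
# Illustrative sketch of hierarchical zones: a lower zone inherits the
# neural-network selections of all hierarchically higher zones, while a
# higher zone never inherits from a lower one. All names are hypothetical.

class Zone:
    def __init__(self, name, networks=None, parent=None):
        self.name = name
        self.networks = list(networks or [])  # selections associated directly
        self.parent = parent                  # hierarchically higher zone, if any

    def effective_networks(self):
        """Own selections plus everything inherited from higher zones."""
        inherited = self.parent.effective_networks() if self.parent else []
        return inherited + self.networks

garden = Zone("garden", networks=["soil-global"])
lawn = Zone("lawn", networks=["grass-local"], parent=garden)

# The lower zone sees both its own and the inherited selection...
assert lawn.effective_networks() == ["soil-global", "grass-local"]
# ...but the higher zone does not see the lower zone's selection.
assert garden.effective_networks() == ["soil-global"]
```

A tree of zones (terrain, then lawn, flower bed, driveway, and so on) thus needs each network or rule to be associated only once, at the highest zone to which it applies.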
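Claims 13-15 combine per-network classifications with position-dependent weights and gate movement on a set of predetermined soil types. A minimal sketch of that combination, with hypothetical names and example weights (the patent does not prescribe this particular scoring scheme):

```python
# Illustrative sketch: combine soil-type classifications from several selected
# neural networks using position-dependent weights (claim 14), then check the
# winning soil type against a set of predetermined soil types (claim 15).
from collections import defaultdict

def weighted_soil_type(classifications, weights):
    """classifications: list of (network_name, soil_type) pairs;
    weights: network_name -> weight (depends on vehicle position)."""
    score = defaultdict(float)
    for network, soil_type in classifications:
        score[soil_type] += weights.get(network, 1.0)
    return max(score, key=score.get)

ALLOWED_SOIL_TYPES = {"grass", "compost"}  # predetermined set (example values)

classifications = [
    ("grass-local", "grass"),
    ("soil-global-1", "concrete"),
    ("soil-global-2", "grass"),
]
# Near the lawn, the locally trained network is trusted more.
weights = {"grass-local": 2.0, "soil-global-1": 1.0, "soil-global-2": 1.0}

soil = weighted_soil_type(classifications, weights)
may_move = soil in ALLOWED_SOIL_TYPES
assert soil == "grass" and may_move
```

The control signal for the drive unit would then be derived from `soil` and `may_move`, e.g. halting or rerouting when the classified soil type falls outside the predetermined set.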
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BE2022/5191 | 2022-03-18 | ||
BE20225191A BE1030358B1 (en) | 2022-03-18 | 2022-03-18 | METHOD FOR CONTROL OF AN UNMANNED AUTONOMOUS VEHICLE BASED ON A SOIL TYPE |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023175580A1 true WO2023175580A1 (en) | 2023-09-21 |
Family
ID=80952451
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2023/052640 WO2023175580A1 (en) | 2022-03-18 | 2023-03-17 | Working method for controlling an unmanned autonomous vehicle based on a soil type |
Country Status (2)
Country | Link |
---|---|
BE (1) | BE1030358B1 (en) |
WO (1) | WO2023175580A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018220528A1 (en) | 2017-05-30 | 2018-12-06 | Volta Robots S.R.L. | Method for controlling a soil working means based on image processing and related system |
US20190212752A1 (en) * | 2018-01-05 | 2019-07-11 | Irobot Corporation | Mobile cleaning robot teaming and persistent mapping |
US20210031367A1 (en) * | 2019-07-31 | 2021-02-04 | Brain Corporation | Systems, apparatuses, and methods for rapid machine learning for floor segmentation for robotic devices |
US11037320B1 (en) * | 2016-03-01 | 2021-06-15 | AI Incorporated | Method for estimating distance using point measurement and color depth |
2022

- 2022-03-18 BE BE20225191A patent/BE1030358B1/en active IP Right Grant

2023

- 2023-03-17 WO PCT/IB2023/052640 patent/WO2023175580A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
BE1030358B1 (en) | 2023-10-17 |
BE1030358A1 (en) | 2023-10-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11297755B2 (en) | Method for controlling a soil working means based on image processing and related system | |
US11710255B2 (en) | Management and display of object-collection data | |
BE1024859B1 (en) | AN ENERGETIC AUTONOMOUS, SUSTAINABLE AND INTELLIGENT ROBOT | |
CN111372442B (en) | System and method for operating an autonomous robotic work machine within a travel limit | |
US20230042867A1 (en) | Autonomous electric mower system and related methods | |
JP2018108040A (en) | Work machine, control device, and program for control | |
CN108873845A (en) | A kind of joint greening clipping device and its working method based on artificial intelligence | |
CN114342640A (en) | Data processing method, automatic gardening equipment and computer program product | |
WO2021139683A1 (en) | Self-moving device | |
WO2023175580A1 (en) | Working method for controlling an unmanned autonomous vehicle based on a soil type | |
US11849668B1 (en) | Autonomous vehicle navigation | |
WO2024213055A1 (en) | Control method and apparatus, storage medium, and electronic device | |
US20240134394A1 (en) | Automatic work system and turning method therefor, and self-moving device | |
WO2023146451A1 (en) | Improved operation for a robotic work tool system | |
WO2023238081A1 (en) | Method for determining a work zone for an unmanned autonomous vehicle | |
WO2021042486A1 (en) | Automatic working system, automatic walking device and control method therefor, and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23714336; Country of ref document: EP; Kind code of ref document: A1 |