GB2596834A - Mobile robot system and method for moving a mobile robot to a destination location


Info

Publication number
GB2596834A
Authority
GB
United Kingdom
Prior art keywords
image
mobile robot
destination
noise
noise filter
Prior art date
Legal status
Withdrawn
Application number
GB2010472.5A
Other versions
GB202010472D0 (en)
Inventor
Panagi Geromichalos Dimitrios
Current Assignee
Continental Automotive GmbH
Original Assignee
Continental Automotive GmbH
Priority date
Filing date
Publication date
Application filed by Continental Automotive GmbH filed Critical Continental Automotive GmbH
Priority to GB2010472.5A
Publication of GB202010472D0
Publication of GB2596834A


Classifications

    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/02 Docking stations; Docking operations

Abstract

A mobile robot system 100 for moving a mobile robot 190 to a destination location. The robot has an image provider 112 to capture images, an image noise filter 104 to remove noise from the images and a destination determiner 110 to determine whether the robot has arrived at a location based on the filtered image. The image noise filter has a trained neural network 120 and may be configured to remove environmental noise such as precipitation, haze, and illumination. The noise filter may also remove temporary objects from the image. The destination determiner may compare the filtered image with a destination image to determine whether the robot has arrived at the location. The comparison may use an image feature matcher 108. The mobile robot may be an autonomous vacuum cleaner, a security patrolling robot or a delivery robot. Captured images may be 2D or 3D images provided by at least one image capturing device, such as a camera or LiDAR sensor.

Description

MOBILE ROBOT SYSTEM AND METHOD FOR MOVING A MOBILE ROBOT TO A DESTINATION LOCATION
FIELD OF THE INVENTION
The invention relates to a mobile robot system for moving a mobile robot to a destination location, and a method of moving a mobile robot to a destination location.
BACKGROUND
A mobile robot is a machine movable from a first location to a second location. A mobile robot may be operated indoors or outdoors, and domestically or industrially. A mobile robot may be an autonomous movable machine, such as an autonomous vacuum cleaner, a security patrolling robot or a delivery robot. A mobile robot may be configured to move from a particular origin location to a specific destination location, for instance, an autonomous vacuum cleaner configured to return to a docking station after vacuuming, a security patrolling robot configured to continually patrol between a first location and a second location, or a delivery robot configured to transport an object from an origin location to a destination location. However, it may be difficult for a mobile robot to locate a destination location, which poses challenges for operating the mobile robot.
SUMMARY
An objective is to provide a mobile robot system that allows a mobile robot to accurately locate a destination location, or a corresponding method.
According to a first aspect of the invention, there is provided a mobile robot system for moving a mobile robot to a destination location, the mobile robot system comprising: an image provider configured to provide an image; an image noise filter comprising a trained neural network system configured to remove noise from the image; and a destination determiner configured to determine whether the mobile robot has arrived at the destination location based on the filtered image.
The image noise filter advantageously is able to remove or filter noise from an image so that the image may be accurately assessed by the mobile robot system. Hence, interesting features in the image may be accurately identified after the image noise filter removes at least some of the noise in the image. Moreover, a trained neural network system may be able to effectively remove at least some of the noise in an image. Thus, the destination determiner is able to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered. Furthermore, the mobile robot system is advantageously able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
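By way of illustration only, the three claimed components could be wired together as in the following Python sketch; the class names, the identity stand-in for the trained denoiser and the correlation-based arrival test are our assumptions, not features prescribed by the patent.

    import numpy as np

    class ImageProvider:
        """Supplies an image of the robot's surroundings (random stand-in data)."""
        def provide(self) -> np.ndarray:
            return np.random.rand(64, 64)  # hypothetical 64x64 grayscale image

    class ImageNoiseFilter:
        """In the invention this wraps a trained neural network; here it is an identity stand-in."""
        def filter(self, image: np.ndarray) -> np.ndarray:
            return image

    class DestinationDeterminer:
        """Decides arrival by comparing the filtered image with a destination image."""
        def __init__(self, destination_image: np.ndarray, threshold: float = 0.9):
            self.destination_image = destination_image
            self.threshold = threshold

        def has_arrived(self, filtered_image: np.ndarray) -> bool:
            # Toy similarity score: normalised cross-correlation of pixel values.
            a = filtered_image.ravel() - filtered_image.mean()
            b = self.destination_image.ravel() - self.destination_image.mean()
            score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
            return score >= self.threshold

    provider = ImageProvider()
    noise_filter = ImageNoiseFilter()
    determiner = DestinationDeterminer(destination_image=provider.provide())
    print(determiner.has_arrived(noise_filter.filter(provider.provide())))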
Optionally, the image noise filter comprises a trained generative variational autoencoder neural network system configured to remove noise from the image. Advantageously, the trained generative variational autoencoder neural network system may be able to effectively remove at least some of the noise in an image.
Optionally, the image noise filter comprises a trained generative adversarial network neural network system configured to remove noise from the image. Advantageously, the trained generative adversarial network neural network system may be able to effectively remove at least some of the noise in an image.
Optionally, the image noise filter comprises an image external noise filter configured to remove environmental noise from the image. The image external noise filter, advantageously, is able to remove environmental noise in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external noise filter removes at least some of the environmental noise in the image.
Environmental noise may at least partially obstruct an interesting feature in an image, and may be caused by precipitation, haze, illumination or a temporary object captured in an image. Precipitation includes rain, snow or hail. Haze includes air that comprises small drops of liquid, small solid particles or gas, through which it is difficult to see. Environmental noise caused by illumination may, for example, be due to insufficient light, for instance, along a street with no lights on a moonless night, or too much light, for instance, blinding rays of sunlight falling on a lens of an image capturing device. A temporary object is an object that is positioned at a certain location for a period of time. A temporary object may be an animate object, such as a human being or an annual plant, or an inanimate object, such as a vehicle. A temporary object may have been moving, such as a walking pedestrian, or stationary, such as temporary signage.
Optionally, the image noise filter comprises an image external precipitation noise filter configured to remove environmental noise caused by precipitation from the image. The image external precipitation noise filter, advantageously, is able to remove environmental noise caused by precipitation in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external precipitation noise filter removes at least some of the environmental noise caused by precipitation from the image. Precipitation includes rain, snow or hail.
Optionally, the image noise filter comprises an image external haze noise filter configured to remove environmental noise caused by haze from the image. The image external haze noise filter, advantageously, is able to remove environmental noise caused by haze in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external haze noise filter removes at least some of the environmental noise caused by haze from the image. Haze includes air that comprises small drops of liquid, small solid particles or gas, through which it is difficult to see.
Optionally, the image noise filter comprises an image external illumination noise filter configured to remove environmental noise caused by illumination from the image. The image external illumination noise filter, advantageously, is able to remove environmental noise caused by illumination in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external illumination noise filter removes at least some of the environmental noise caused by illumination from the image. Environmental noise caused by illumination may, for example, be due to insufficient light, for instance, along a street with no lights on a moonless night, or too much light, for instance, blinding rays of sunlight falling on a lens of an image capturing device.
Optionally, the image noise filter comprises an image temporary object noise filter configured to remove environmental noise caused by a temporary object from the image. The image temporary object noise filter, advantageously, is able to remove environmental noise caused by a temporary object in an image. Hence, interesting features in the image may subsequently be accurately identified after the image temporary object noise filter removes at least some of the environmental noise caused by the temporary object from the image. A temporary object is an object that is positioned at a certain location for a period of time. A temporary object may be an animate object, such as a human being or an annual plant, or an inanimate object, such as a vehicle. A temporary object may have been moving, such as a walking pedestrian, or stationary, such as temporary signage.
Optionally, the mobile robot system further comprises a network device configured to receive the image from a network. Advantageously, the network device may allow the mobile robot system to retrieve the image from a network, for instance, to download from a remotely located server or to receive the image sent from a mobile phone through a mobile network.
Optionally, the mobile robot system further comprises a nonvolatile memory device configured to store the image. The stored image may, advantageously, be used subsequently.
Optionally, the mobile robot system further comprises an image capturing device configured to capture the image. The image capturing device may be used to capture an image of or at the current location of a mobile robot, in order to, advantageously, allow the current location of the mobile robot to be determined.
Optionally, the mobile robot system further comprises an image preprocessor configured to preprocess the image. Advantageously, the image preprocessor is able to prepare the image such that it is suitable to be processed by another module in the mobile robot system, such as by the image noise filter to remove or filter noise from an image.
Optionally, the mobile robot system further comprises an image feature identifier configured to identify interesting features of the filtered image. Advantageously, the image feature identifier is able to identify interesting features of the filtered image, so that the interesting features of the filtered image may be accurately and quickly matched to interesting features of another image.
Optionally, the mobile robot system further comprises an image feature matcher configured to match interesting features of the filtered image with interesting features of another image. Advantageously, the image feature matcher is able to match the interesting features of the filtered image with the interesting features of the other image, so that the destination determiner is able to accurately determine whether the mobile robot has arrived at the destination location based on the filtered image.

Optionally, a mobile robot comprises the mobile robot system.
Optionally, there is provided a method of moving a mobile robot to a destination location using the mobile robot system, the method comprising the acts of: providing the image; filtering noise, by the trained neural network, from the image; and determining whether the mobile robot has arrived at the destination based on the filtered image.
Therefore, advantageously, the mobile robot system is able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location. Moreover, the mobile robot system is able to, advantageously, accurately determine its current location or whether it has arrived at the destination location without totally relying on the process of dead reckoning.
Any feature or step disclosed in the context of the first aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the first aspect of the invention, and in the inventions generally.
According to a second aspect of the invention, there is provided a mobile robot comprising a mobile robot system for moving the mobile robot to a destination location, wherein the mobile robot system comprises: an image provider configured to provide an image; an image noise filter comprising a trained neural network system configured to remove noise from the image, wherein the image noise filter comprises: a trained generative adversarial network neural network system configured to remove noise from the image; an image external precipitation noise filter configured to remove environmental noise caused by precipitation from the image; an image external haze noise filter configured to remove environmental noise caused by haze from the image; an image external illumination noise filter configured to remove environmental noise caused by illumination from the image; and an image temporary object noise filter configured to remove environmental noise caused by a temporary object from the image; a destination determiner configured to determine whether the mobile robot has arrived at the destination location based on the filtered image; a network device configured to receive the image from a network; an image capturing device configured to capture the image; and an image feature matcher configured to match interesting features of the filtered image with interesting features of another image.
Therefore, advantageously, the mobile robot is able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location. Moreover, the mobile robot is able to, advantageously, accurately determine its current location or whether it has arrived at the destination location without totally relying on the process of dead reckoning.
Any feature or step disclosed in the context of the second aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the second aspect of the invention, and in the inventions generally.
According to a third aspect of the invention, there is provided a method of moving a mobile robot to a destination location, the method comprising the acts of: providing an image; fil-tering noise, by a trained neural network, from the image; and determining whether the mobile robot has arrived at the destination based on the filtered image.
The step of filtering the noise, advantageously, may allow the image to be accurately assessed in a subsequent step, such that interesting features in the image may be accurately identified after at least some of the noise in the image has been removed.
Moreover, a trained neural network system may be able to effectively remove at least some of the noise in an image, in order to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered.
Furthermore, the method is advantageously able to allow a mobile robot to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
Any feature or step disclosed in the context of the third aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the third aspect of the invention, and in the inventions generally.
According to a fourth aspect of the invention, there is provided a non-transitory computer-readable medium with instructions stored thereon, that when executed, perform a method of moving a mobile robot to a destination location comprising the acts of: providing an image; filtering noise, with a trained neural network, from the image; and determining whether the mobile robot has arrived at the destination based on the filtered image.
The step of filtering the noise, advantageously, may allow the image to be accurately assessed in a subsequent step, such that interesting features in the image may be accurately identified after at least some of the noise in the image has been removed. Moreover, a trained neural network system may be able to effectively remove at least some of the noise in an image, in order to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered. Furthermore, the method is advantageously able to allow a mobile robot to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
Any feature or step disclosed in the context of the fourth aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the fourth aspect of the invention, and in the inventions generally.
According to a fifth aspect of the invention, there is provided a mobile robot system for moving a mobile robot to a destination location, the mobile robot system comprising: an image provider configured to provide a destination image of the destination location; an image noise filter configured to remove noise from the destination image of the destination location; and a destination determiner configured to determine whether the mobile robot has arrived at the destination location based on the filtered destination image.
The image noise filter advantageously is able to remove or filter noise from the destination image so that the destination image may be accurately assessed by the mobile robot system. Hence, interesting features in the destination image may be accurately identified after the image noise filter removes at least some of the noise in the destination image. In addition, the destination determiner is able to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised destination image, from which at least some of the noise has been removed or filtered. Furthermore, the mobile robot system is advantageously able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
Optionally, the image noise filter comprises a trained neural network system configured to remove noise from the destination image of the destination location. Advantageously, a trained neural network system may be able to effectively remove at least some of the noise in an image.
Optionally, the image noise filter comprises a trained generative variational autoencoder neural network system configured to remove noise from the image. Advantageously, the trained generative variational autoencoder neural network system may be able to effectively remove at least some of the noise in an image.
Optionally, the image noise filter comprises a trained generative adversarial network neural network system configured to remove noise from the image. Advantageously, the trained generative adversarial network neural network system may be able to effectively remove at least some of the noise in an image.

Optionally, the image noise filter comprises an image external noise filter configured to remove environmental noise from the image. The image external noise filter, advantageously, is able to remove environmental noise in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external noise filter removes at least some of the environmental noise in the image.
Optionally, the image noise filter comprises an image external precipitation noise filter configured to remove environmental noise caused by precipitation from the image. The image external precipitation noise filter, advantageously, is able to remove environmental noise caused by precipitation in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external precipitation noise filter removes at least some of the environmental noise caused by precipitation from the image. Precipitation includes rain, snow or hail.
Optionally, the image noise filter comprises an image external haze noise filter configured to remove environmental noise caused by haze from the image. The image external haze noise filter, advantageously, is able to remove environmental noise caused by haze in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external haze noise filter removes at least some of the environmental noise caused by haze from the image. Haze includes air that comprises small drops of liquid, small solid particles or gas, through which it is difficult to see.
Optionally, the image noise filter comprises an image external illumination noise filter configured to remove environmental noise caused by illumination from the image. The image external illumination noise filter, advantageously, is able to remove environmental noise caused by illumination in an image. Hence, interesting features in the image may subsequently be accurately identified after the image external illumination noise filter removes at least some of the environmental noise caused by illumination from the image. Environmental noise caused by illumination may, for example, be due to insufficient light, for instance, along a street with no lights on a moonless night, or too much light, for instance, blinding rays of sunlight falling on a lens of an image capturing device.
Optionally, the image noise filter comprises an image temporary object noise filter configured to remove environmental noise caused by a temporary object from the image. The image temporary object noise filter, advantageously, is able to remove environmental noise caused by a temporary object in an image. Hence, interesting features in the image may subsequently be accurately identified after the image temporary object noise filter removes at least some of the environmental noise caused by the temporary object from the image. A temporary object is an object that is positioned at a certain location for a period of time. A temporary object may be an animate object, such as a human being or an annual plant, or an inanimate object, such as a vehicle. A temporary object may have been moving, such as a walking pedestrian, or stationary, such as temporary signage.
Optionally, the mobile robot system further comprises a network device configured to receive the image from a network. Advantageously, the network device may allow the mobile robot system to retrieve the image from a network, for instance, to download from a remotely located server or to receive the image sent from a mobile phone through a mobile network.
Optionally, the mobile robot system further comprises a nonvolatile memory device configured to store the image. The stored image may, advantageously, be used subsequently.
Optionally, the mobile robot system further comprises an image capturing device configured to capture the image. The image capturing device may be used to capture an image of or at the current location of a mobile robot, in order to, advantageously, allow the current location of the mobile robot to be determined.
Optionally, the mobile robot system further comprises an image preprocessor configured to preprocess the image. Advantageously, the image preprocessor is able to prepare the image such that it is suitable to be processed by another module in the mobile robot system, such as by the image noise filter to remove or filter noise from an image.
Optionally, the mobile robot system further comprises an image feature identifier configured to identify interesting features of the filtered image. Advantageously, the image feature identifier is able to identify interesting features of the filtered image, so that the interesting features of the filtered image may be accurately and quickly matched to interesting features of another image.
Optionally, the mobile robot system further comprises an image feature matcher configured to match interesting features of the filtered image with interesting features of another image. Advantageously, the image feature matcher is able to match the interesting features of the filtered image with the interesting features of the other image, so that the destination determiner is able to accurately determine whether the mobile robot has arrived at the destination location based on the filtered image.

Therefore, advantageously, the mobile robot system is able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location. Moreover, the mobile robot system is able to, advantageously, accurately determine its current location or whether it has arrived at the destination location without totally relying on the process of dead reckoning.
Any feature or step disclosed in the context of the fifth aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the fifth aspect of the invention, and in the inventions generally.
According to a sixth aspect of the invention, there is provided a mobile robot comprising a mobile robot system for moving the mobile robot to a destination location, wherein the mobile robot system comprises: an image provider configured to provide a destination image of the destination location; an image noise filter configured to remove noise from the destination image of the destination location; wherein the image noise filter comprises: a trained generative adversarial network neural network system configured to remove noise from the destination image of the destination location; an image external precipitation noise filter configured to remove environmental noise caused by precipitation from the destination image of the destination location; an image external haze noise filter configured to remove environmental noise caused by haze from the destination image of the destination location; an image external illumination noise filter configured to remove environmental noise caused by illumination from the destination image of the destination location; and an image temporary object noise filter configured to remove environmental noise caused by a temporary object from the destination image of the destination location; a destination determiner configured to determine whether the mobile robot has arrived at the destination location based on the filtered destination image; a network device configured to receive the destination image from a network; and an image feature matcher configured to match the interesting features of the filtered destination image with interesting features of another image.
Therefore, advantageously, the mobile robot is able to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location. Moreover, the mobile robot is able to, advantageously, accurately determine its current location or whether it has arrived at the destination location without totally relying on the process of dead reckoning.
Any feature or step disclosed in the context of the sixth aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the sixth aspect of the invention, and in the inventions generally.
According to a seventh aspect of the invention, there is provided a method of moving a mobile robot to a destination location, the method comprising the acts of: providing a destination image of the destination location; filtering noise from the destination image; and determining whether the mobile robot has arrived at the destination based on the filtered destination image.
The step of filtering the noise, advantageously, may allow the image to be accurately assessed in a subsequent step, such that interesting features in the image may be accurately identified after at least some of the noise in the image has been removed. Moreover, a trained neural network system may be able to effectively remove at least some of the noise in an image, in order to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered. Furthermore, the method is advantageously able to allow a mobile robot to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
Any feature or step disclosed in the context of the seventh aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the seventh aspect of the invention, and in the inventions generally.
According to an eighth aspect of the invention, there is provided a non-transitory computer-readable medium with instructions stored thereon, that when executed, perform a method of moving a mobile robot to a destination location comprising the acts of: providing a destination image of the destination location; filtering noise from the destination image; and determining whether the mobile robot has arrived at the destination based on the filtered destination image.
The step of filtering the noise, advantageously, may allow the image to be accurately assessed in a subsequent step, such that interesting features in the image may be accurately identified after at least some of the noise in the image has been removed. Moreover, a trained neural network system may be able to effectively remove at least some of the noise in an image, in order to accurately determine whether the mobile robot has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered. Furthermore, the method is advantageously able to allow a mobile robot to accurately determine its current location or whether it has arrived at the destination location even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
Any feature or step disclosed in the context of the eighth aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of other aspects of the invention, and in the inventions generally. In addition, any feature or step disclosed in the context of any other aspect of the invention may also be used, to the extent possible, in combination with and/or in the context of the eighth aspect of the invention, and in the inventions generally.
In this summary, in the description below, in the claims below, and in the accompanying drawings, reference is made to particular features (including method steps) of the invention. It is to be understood that the disclosure of the invention in this specification includes all possible combinations of such particular features. For example, where a particular feature is disclosed in the context of a particular aspect or embodiment of the invention, or a particular claim, that feature can also be used, to the extent possible, in combination with and/or in the context of other particular aspects and embodiments of the invention, and in the inventions generally.
In this summary, in the description below, in the claims below, and in the accompanying drawings, where reference is made herein to a method comprising two or more defined steps, the defined steps can be carried out in any order or simultaneously (except where the context excludes that possibility), and the method can include one or more other steps which are carried out before any of the defined steps, between two of the defined steps, or after all the defined steps (except where the context excludes that possibility).
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "comprises" and grammatical equivalents thereof are used herein to mean that other components, ingredients, steps, et cetera are optionally present. For example, an article "comprising" (or "which comprises") components A, B, and C can consist of (that is, contain only) components A, B, and C, or can contain not only components A, B, and C but also one or more other components.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "at least" followed by a number is used herein to denote the start of a range beginning with that number (which may be a range having an upper limit or no upper limit, depending on the variable being defined). For example, "at least 1" means 1 or more than 1. The term "at most" followed by a number is used herein to denote the end of a range ending with that number (which may be a range having 1 or 0 as its lower limit, or a range having no lower limit, depending on the variable being defined). For example, "at most 4" means 4 or less than 4, and "at most 40%" means 40% or less than 40%. When, in this specification, a range is given as "(a first number) to (a second number)" or "(a first number) - (a second number)", this means a range whose lower limit is the first number and whose upper limit is the second number. For example, 25 to 100 mm means a range whose lower limit is 25 mm, and whose upper limit is 100 mm.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "network" means at least two computers and/or devices operatively connected together, for instance, to permit data to be shared. Personal area network (PAN), local area network (LAN) and wide area network (WAN) are examples of types of network.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "volatile memory" means any type of computer memory where the contents of the memory are lost if there is no power to the computer. Random-access memory (RAM) is an example of a type of volatile memory. As used in the summary above, in this description, in the claims below, and in the accompanying drawings, the term "nonvolatile memory" or the term "non-transitory computer-readable medium" means any type of computer memory where the contents of the memory are retained even if there is no power to the computer. Hard disk and solid-state drive (SSD) are examples of types of nonvolatile memory or non-transitory computer-readable medium.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "neural network" or the term "artificial neural network" means a type of machine learning algorithm that uses a web of nodes, edges and layers. The first layer of a neural network comprises input nodes that accept data inputs from a data set. The input nodes then send information through the edges to the nodes in the next layer. Each edge comprises an activation function that is alterable during a training process. The final layer of the neural network comprises the output nodes that provide data outputs of the neural network. During the training process, the data outputs of the neural network are compared to the actual outputs of the data set. The differences between the data outputs of the neural network and the actual outputs of the data set are measured and denoted as an error value. The error value is then fed back to the neural network, which changes its activation functions in order to minimise the error value. The training process is an iterative process. After the neural network has been trained, the trained neural network may then be used to predict a data output from a particular data input. A convolutional neural network is an example of a type of artificial neural network.
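As a concrete illustration of the iterative training process described above, the following sketch fits a small network with PyTorch (our choice of framework; the patent names none). In most modern frameworks the quantities altered during training are the layer weights; all sizes and data here are illustrative.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    inputs = torch.randn(256, 4)                # data inputs from a data set
    targets = inputs.sum(dim=1, keepdim=True)   # actual outputs of the data set

    for epoch in range(100):                    # the training process is iterative
        prediction = model(inputs)              # data outputs of the neural network
        error = loss_fn(prediction, targets)    # the measured error value
        optimiser.zero_grad()
        error.backward()                        # the error value is fed back
        optimiser.step()                        # parameters change to minimise the error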
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "generative adversarial network neural network system" means a type of neural network system comprising two neural networks configured to contest each other, for instance, in a zero-sum game setting. A generative adversarial network neural network system may be trained to generate authentic-looking images. During the training process, a random input, comprising perhaps Gaussian noise, may be provided to a first neural network of the generative adversarial network neural network system, which learns to generate an image. The generated image and an actual image are provided to a second neural network of the generative adversarial network neural network system, which learns to recognise which of the generated image and the actual image was generated by the first neural network of the generative adversarial network neural network system. On the one hand, the first neural network of the generative adversarial network neural network system is penalised if the second neural network recognises which of the generated image and the actual image was generated by the first neural network. On the other hand, the second neural network of the generative adversarial network neural network system is penalised if the second neural network fails to recognise which of the generated image and the actual image was generated by the first neural network. After the generative adversarial network neural network system has been trained, the trained generative adversarial network neural network system may then be used to generate authentic-looking images. A generative adversarial network neural network system may also be trained to remove or filter noise from images.
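The adversarial training described above might be sketched as follows, again assuming PyTorch; the architectures, batch size and learning rates are illustrative stand-ins, and the "actual" images are random tensors so that the example is self-contained.

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 16, 64
    generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                              nn.Linear(128, image_dim), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                                  nn.Linear(128, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    real_images = torch.randn(32, image_dim)    # stand-in for actual images
    noise = torch.randn(32, latent_dim)         # Gaussian random input
    fake_images = generator(noise)

    # The second network is penalised if it fails to tell actual from generated.
    d_loss = (bce(discriminator(real_images), torch.ones(32, 1)) +
              bce(discriminator(fake_images.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The first network is penalised if the second recognises its output as generated.
    g_loss = bce(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()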
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "image" means a two-dimensional or three-dimensional picture of an actual location in the real world. An image may be captured by one single image capturing device, such as a camera or a LiDAR sensor, or created by fusing data from several devices, such as an ultrasonic sensor, a LiDAR sensor, a radar sensor or a camera.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "noise" means information or data that is not wanted and that may make it difficult for important or useful information or data to be read.
In the context of an image, the image may comprise internal noise and external noise. Internal noise may be caused by an image capturing device used to capture the image. An example of external noise is environmental noise, such as precipitation, haze, illumination or unwanted objects. For instance, in an image of a building, the building may be occluded by external environmental noise, which may include rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past.
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "pose" means a particular position and orientation adopted. A position may be defined using the commonly used mathematical notations of x axis, y axis and z axis, and an orientation may be expressed in terms of yaw angle, pitch angle and roll angle.
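A minimal representation of such a pose, with field names and units of our own choosing:

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float      # position along the x axis (metres)
        y: float      # position along the y axis (metres)
        z: float      # position along the z axis (metres)
        yaw: float    # rotation about the vertical axis (radians)
        pitch: float  # rotation about the lateral axis (radians)
        roll: float   # rotation about the longitudinal axis (radians)

    # For instance, a hypothetical docking pose:
    docking_pose = Pose(x=2.5, y=-1.0, z=0.0, yaw=1.57, pitch=0.0, roll=0.0)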
As used in this summary, in the description below, in the claims below, and in the accompanying drawings, the term "interesting feature" means a thing in an image that allows a location, position, pose or person to be determined or recognised. In the context of a location, position or pose, an interesting feature may be a permanent object, such as a building, a statue, a pillar, a cabinet, a road sign, a tree, a pavilion, a lake or a mountain. In the context of a person, an interesting feature may be a facial feature, such as an eye, a mouth, a nose or an ear.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 shows a mobile robot system for moving a mobile robot to a destination location;
Figure 2 shows a trained neural network system comprising a trained generative variational autoencoder neural network system;
Figure 3 shows a trained neural network system comprising a trained generative adversarial network neural network system;
Figure 4 shows a mobile robot comprising the mobile robot system of Figure 1; and
Figure 5 shows a diagram for a method of moving the mobile robot of Figure 4 to a destination location, using the mobile robot system of Figure 1.
In the drawings, like parts are denoted by like reference numerals.
DESCRIPTION
In the summary above, in this description, in the claims below, and in the accompanying drawings, reference is made to particular features (including method steps) of the invention. It is to be understood that the disclosure of the invention in this specification includes all possible combinations of such particular features. For example, where a particular feature is disclosed in the context of a particular aspect or embodiment of the invention, or a particular claim, that feature can also be used, to the extent possible, in combination with and/or in the context of other particular aspects and embodiments of the invention, and in the inventions generally.
Figure 1 shows a mobile robot system 100 for moving a mobile robot (190: Figure 4) to a destination location. The mobile robot system 100 comprises an image preprocessor 102, an image noise filter 104, an image feature identifier 106, an image feature matcher 108, a destination determiner 110 and an image provider 112. The mobile robot system 100 may further comprise a network device 114, a memory device 116 and an image capturing device 118. The mobile robot system 100 may be realised by software, hardware or a combination of software and hardware. The mobile robot system 100 may be configured to be comprised in a network, wherein at least part of the mobile robot system 100 is located in a location remote from the mobile robot (190: Figure 4), such as a remotely located server connected to the mobile robot (190: Figure 4) via the network device 114. For instance, at least one of the image preprocessor 102, the image noise filter 104, the image feature identifier 106, the image feature matcher 108, the destination determiner 110 or the image provider 112 may be comprised in a computer comprised in the network, wherein the computer is located in a location remote from the mobile robot (190: Figure 4).
The network device 114 is configured to allow the mobile robot system 100 to access a network, such as an intranet, an extranet or the internet. The network device 114 may comprise a network interface card, a hub, a switch, a bridge or a modem. The network device 114 may comprise software, hardware or a combination of software and hardware. Advantageously, the network device 114 may allow the mobile robot system 100 to retrieve an image from a network, for instance, to download from a remotely located server or to receive the image sent from a mobile phone through a mobile network.
The image capturing device 118 is configured to capture an image of or at the current location of the mobile robot (190: Figure 4). The image capturing device 118 may comprise at least one of an ultrasonic sensor, a LiDAR sensor, a radar sensor or a camera for capturing images of or at the current location, position or pose of the mobile robot (190: Figure 4). The image capturing device 118 may also comprise memory for storing the images captured. The image capturing device 118, advantageously, allows the current location of the mobile robot (190: Figure 4) to be determined.
The image provider 112 is configured to obtain an image from the image capturing device 118, or from a network via the network device 114. The image obtained may be of the current location, position or pose of the mobile robot (190: Figure 4), of the destination location, position or pose or of a person, such as an intended recipient of a delivery. The image provider 112 is also configured to provide the image to another module in the mobile robot system 100, for instance, the image preprocessor 102, the image noise filter 104, the image feature identifier 106, the image feature matcher 108 or the destination determiner 110. The image provider 112 may comprise software, hardware or a combination of software and hardware.
The image preprocessor 102 is configured to preprocess or prepare an image such that it is suitable to be processed by another module in the mobile robot system 100, such as by the image noise filter 104, the image feature identifier 106, the image feature matcher 108 or the destination determiner 110. The image preprocessor 102 may crop, rotate, compress, decompress or adjust the colours of the image. The image preprocessor 102 may comprise software, hardware or a combination of software and hardware.
The image noise filter 104 is configured to remove or filter noise from an image. The image noise filter 104 comprises a trained neural network system 120 and an image external noise filter 122 operatively connected to each other. The image noise filter 104 may comprise software, hardware or a combination of software and hardware.
The trained neural network system 120 may comprise a trained generative variational autoencoder neural network system (132: Figure 2) or a trained generative adversarial network neural network system (134: Figure 3). The trained neural network system 120 may be configured to receive an image as an input and generate a filtered image as an output. The trained neural network system 120 may be configured to remove noise, such as precipitation, haze, illumination or unwanted object, from the image provided.
Hence, if a building in the image is occluded by noise, for instance, rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past, the trained neural network system 120 may remove the noise and generate the filtered image that provides a clear unobstructed view of the building. The trained neural network system 120 may comprise software, hardware or a combination of software and hardware.
Figure 2 shows the trained neural network system 120 comprising the trained generative variational autoencoder neural network system 132. The trained generative variational autoencoder neural network system 132 may be configured to receive an image as an input and generate a filtered image as an output. The trained generative variational autoencoder neural network system 132 may be configured to remove noise, such as precipitation, haze, illumination or unwanted object, from the image provided. Hence, if a building in the image is occluded by noise, for instance, rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past, the trained generative variational autoencoder neural network system 132 may remove the noise and generate the filtered image that provides a clear unobstructed view of the building. The trained generative variational autoencoder neural network system 132 may comprise software, hardware or a combination of software and hardware. Advantageously, the trained generative variational autoencoder neural network system 132 may be able to effectively remove at least some of the noise in an image.
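A skeletal denoising variational autoencoder of the kind system 132 may comprise is sketched below, assuming PyTorch and training on pairs of noisy and clean images; the fully connected architecture, sizes and loss weighting are illustrative assumptions rather than details taken from the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenoisingVAE(nn.Module):
        def __init__(self, image_dim=784, latent_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
            self.to_mu = nn.Linear(256, latent_dim)
            self.to_logvar = nn.Linear(256, latent_dim)
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                         nn.Linear(256, image_dim), nn.Sigmoid())

        def forward(self, noisy):
            h = self.encoder(noisy)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterise
            return self.decoder(z), mu, logvar

    vae = DenoisingVAE()
    optimiser = torch.optim.Adam(vae.parameters(), lr=1e-3)

    clean = torch.rand(64, 784)                              # stand-in clean images
    noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)

    for _ in range(10):
        filtered, mu, logvar = vae(noisy)
        recon = F.mse_loss(filtered, clean)                  # reconstruct the clean image
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = recon + 1e-3 * kl
        optimiser.zero_grad(); loss.backward(); optimiser.step()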
Figure 3 shows the trained neural network system 120 comprising the trained generative adversarial network neural network system 134. The trained generative adversarial network neural network system 134 may be configured to receive an image as an input and generate a filtered image as an output. The trained generative adversarial network neural network system 134 may be configured to remove noise, such as precipitation, haze, illumination or unwanted object, from the image provided. Hence, if a building in the image is occluded by noise, for instance, rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past, the trained generative adversarial network neural network system 134 may remove the noise and generate the filtered image that provides a clear unobstructed view of the building. The trained generative adversarial network neural network system 134 may comprise software, hardware or a combination of software and hardware. Advantageously, the trained generative adversarial network neural network system 134 may be able to effectively remove at least some of the noise in an image.
The image external noise filter 122 is configured to remove at least some environmental noise from an image, so that interesting features in the image may subsequently be accurately identified.
The image external noise filter 122 is operatively connected to the trained neural network system 120. Hence, the image external noise filter 122 may use the trained neural network system 120 to remove at least some environmental noise from an image. In other words, the image external noise filter 122 may use the trained generative adversarial network neural network system 134 or the trained generative variational autoencoder neural network system 132 to remove at least some environmental noise from the image.
The image external noise filter 122 may comprise an image external precipitation noise filter 124, an image external haze noise filter 126, an image external illumination noise filter 128, an image temporary object noise filter 130 or combinations thereof. The image external noise filter 122 may comprise software, hardware or a combination of software and hardware.
Environmental noise may at least partially obstruct an interesting feature in an image, and may be caused by precipitation, haze, illumination or a temporary object captured in an image.

Precipitation includes rain, snow or hail. Haze includes air that comprises small drops of liquid, small solid particles or gas, through which it is difficult to see. Environmental noise caused by illumination may, for example, be due to insufficient light, for instance, along a street with no lights on a moonless night, or too much light, for instance, blinding rays of sunlight falling on a lens of an image capturing device. A temporary object is an object that is positioned at a certain location for a period of time. A temporary object may be an animate object, such as a human being or an annual plant, or an inanimate object, such as a vehicle. A temporary object may have been moving, such as a walking pedestrian, or stationary, such as temporary signage.
The image external precipitation noise filter 124 is configured to remove environmental noise caused by precipitation from an image, so that interesting features in the image may subsequently be accurately identified. The image external precipitation noise filter 124 is operatively connected to the trained neural network system 120. Hence, the image external precipitation noise filter 124 may use the trained neural network system 120 to remove at least some environmental noise from an image. In other words, the image external precipitation noise filter 124 may use the trained generative adversarial network neural network system 134 or the trained generative variational autoencoder neural network system 132 to remove at least some environmental noise from the image. The image external precipitation noise filter 124 may comprise software, hardware or a combination of software and hardware.
The image external haze noise filter 126 is configured to remove environmental noise caused by haze from an image, so that interesting features in the image may subsequently be accurately identified. The image external haze noise filter 126 is operatively connected to the trained neural network system 120.
Hence, the image external haze noise filter 126 may use the trained neural network system 120 to remove at least some environmental noise from an image. In other words, the image external haze noise filter 126 may use the trained generative adversarial network neural network system 134 or the trained generative variational autoencoder neural network system 132 to remove at least some environmental noise from the image. The image external haze noise filter 126 may comprise software, hardware or a combination of software and hardware.
The image external illumination noise filter 128 is configured to remove environmental noise caused by illumination in an image, so that interesting features in the image may subsequently be accurately identified. The image external illumination noise filter 128 is operatively connected to the trained neural network system 120. Hence, the image external illumination noise filter 128 may use the trained neural network system 120 to remove at least some environmental noise from an image. In other words, the image external illumination noise filter 128 may use the trained generative adversarial network neural network system 134 or the trained generative variational autoencoder neural network system 132 to remove at least some environmental noise from the image. The image external illumination noise filter 128 may comprise software, hardware or a combination of software and hardware.
The image external temporary object noise filter 130 is configured to remove environmental noise caused by a temporary object in an image, so that interesting features in the image may subsequently be accurately identified. The image external temporary object noise filter 130 is operatively connected to the trained neural network system 120. Hence, the image external temporary object noise filter 130 may use the trained neural network system 120 to remove at least some environmental noise from an image. In other words, the image external temporary object noise filter 130 may use the trained generative adversarial network neural network system 134 or the trained generative variational autoencoder neural network system 132 to remove at least some environmental noise from the image. The image external temporary object noise filter 130 may comprise software, hardware or a combination of software and hardware.
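By way of illustration only, the following Python sketch shows how an image noise filter of this kind might apply a trained generative network to an image. The generator architecture, the weight file name and the helper names are assumptions made for the purpose of illustration and do not form part of the disclosed embodiments.

```python
# Illustrative sketch only: applying a trained generative network to
# remove environmental noise from an image, in the manner of the image
# noise filter 104. The generator, the weight file "generator.pt" and
# the helper names are assumptions, not part of this disclosure.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def denoise(image_path: str, generator: torch.nn.Module) -> Image.Image:
    """Return a filtered (denoised) copy of the image at image_path."""
    image = Image.open(image_path).convert("RGB")
    x = TF.to_tensor(image).unsqueeze(0)      # tensor of shape (1, 3, H, W)
    with torch.no_grad():                     # inference only, no training
        y = generator(x).clamp(0.0, 1.0)      # denoised image tensor
    return TF.to_pil_image(y.squeeze(0))

# Hypothetical usage: the generator could be the generator of a trained
# GAN (cf. system 134) or the decoder of a trained variational
# autoencoder (cf. system 132).
# generator = torch.load("generator.pt")
# clean = denoise("current.png", generator)
```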
Therefore, the image noise filter 104 is advantageously able to remove or filter noise from an image, so that interesting features in the image may be accurately identified after the image noise filter 104 removes at least some of the noise, and the image may be accurately assessed by the mobile robot system 100. Thus, the destination determiner 110 is able to accurately determine whether the mobile robot (190: Figure 4) has arrived at the destination location based on the filtered or denoised image, from which at least some of the noise has been removed or filtered. Furthermore, the mobile robot system 100 is advantageously able to accurately determine its current location, or whether it has arrived at the destination location, even if it has no access to previously obtained data, such as previously acquired images related to its current location or the destination location.
The image feature identifier 106 is configured to identify interesting features of an image, so that the interesting features of the image may be accurately and quickly matched to interesting features of another image. The image feature identifier 106 may use any suitable technique, such as a machine learning algorithm, to identify interesting features of an image. For instance, the image feature identifier 106 may use a neural network algorithm or a decision tree algorithm to identify interesting features of the image. The image feature identifier 106 may comprise software, hardware or a combination of software and hardware.
The image feature matcher 108 is configured to match interesting features of an image with interesting features of another image, so that the destination determiner 110 is able to accurately determine whether the mobile robot (190: Figure 4) has arrived at the destination location based on the image. The image feature matcher 108 may use any suitable technique, such as a machine learning algorithm, to match interesting features of an image with interesting features of another image. For instance, the image feature matcher 108 may use a neural network algorithm or a decision tree algorithm to match the interesting features of the image with the interesting features of the other image. The image feature matcher 108 may comprise software, hardware or a combination of software and hardware.
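For instance, a minimal sketch of one suitable non-neural feature identification and matching technique, using ORB keypoints as implemented in OpenCV, is given below; the function name and parameter values are illustrative assumptions only.

```python
# Illustrative sketch: identifying and matching interesting features
# with ORB keypoints from OpenCV, one suitable non-neural technique.
# The function name and parameter values are illustrative assumptions.
import cv2

def match_features(current_gray, destination_gray, ratio: float = 0.75):
    """Match interesting features between two denoised grayscale images."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(current_gray, None)
    kp2, des2 = orb.detectAndCompute(destination_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Lowe-style ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```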
The destination determiner 110 is configured to determine whether the mobile robot (190: Figure 4) has arrived at the destination location based on an image. The destination determiner 110 may use any suitable technique, such as a three-dimensional geometric technique or a geometric consistency technique, to determine whether the mobile robot (190: Figure 4) has arrived at the destination location based on an image. The destination determiner 110 may comprise software, hardware or a combination of software and hardware.
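A minimal sketch of a geometric consistency technique of this kind follows: if a sufficient number of the matched features are consistent with a single homography between the two images, the destination determiner may conclude that the mobile robot has arrived. The inlier threshold of 30 is an illustrative assumption.

```python
# Illustrative sketch of a geometric consistency check for the
# destination determiner 110: the robot is deemed to have arrived when
# enough matched features fit a single homography between the images.
import cv2
import numpy as np

def has_arrived(kp1, kp2, matches, min_inliers: int = 30) -> bool:
    if len(matches) < min_inliers:
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC fits a homography and marks geometrically consistent inliers.
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return inlier_mask is not None and int(inlier_mask.sum()) >= min_inliers
```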
Figure 4 shows the mobile robot 190 comprising the mobile robot system 100.
Figure 5 shows a diagram for a method 200 of moving the mobile robot 190 to a destination location, using the mobile robot system 100. A non-transitory computer-readable medium may comprise instructions stored thereon, that when executed by a processor, perform the method 200.
At step 202, the mobile robot system 100 initialises. At step 204, the image provider 112 retrieves a destination image of a destination location, position, pose or person. The image provider 112 may obtain the destination image from a network, such as an intranet, an extranet or the internet, via the network device 114. For instance, the image provider 112 may obtain the destination image sent from a mobile phone through a mobile network, or download the destination image from a remotely located server. The image provider 112 may also obtain the destination image from the memory device 116.
At step 206, the image preprocessor 102 preprocesses or prepares the destination image so that it is suitable to be processed in a subsequent step. The image preprocessor 102 may crop, rotate, compress, decompress or adjust the colours of the destination image. At step 208, the image noise filter 104 filters or denoises the destination image. In other words, the image noise filter 104 removes at least some noise from the destination image. The image noise filter 104 may use the trained neural network system 120 to remove the noise from the destination image. The trained neural network system 120 may be configured to receive the destination image as an input and generate a filtered or denoised destination image as an output. The trained neural network system 120 may be configured to remove noise, such as precipitation, haze, illumination or an unwanted object, from the image provided. Hence, if a building in the image is occluded by noise, for instance, rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past, the trained neural network system 120 may remove the noise and generate the filtered destination image that provides a clear unobstructed view of the building. The image external noise filter 122 of the image noise filter 104 may remove at least some environmental noise from the destination image. The image external noise filter 122 may use the trained neural network system 120 to remove the noise from the destination image.
At step 210, the process of step 208 starts. At step 212, the image external precipitation noise filter 124 of the image external noise filter 122 removes environmental noise caused by precipitation from the destination image. At step 214, the image external haze noise filter 126 of the image external noise filter 122 removes environmental noise caused by haze from the destination image. At step 216, the image external illumination noise filter 128 of the image external noise filter 122 removes environmental noise caused by illumination from the destination image. At step 218, the image external temporary object noise filter 130 of the image external noise filter 122 removes environmental noise caused by a temporary object from the destination image. At step 220, the process of step 208 ends.
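Assuming hypothetical callable filter objects standing in for elements 124, 126, 128 and 130, the sequence of steps 212 to 218 may be sketched as follows; as noted later, the filters may be applied in any order.

```python
# Illustrative sketch of steps 212 to 218: applying the environmental
# noise filters in sequence. The filter objects are hypothetical
# callables standing in for elements 124, 126, 128 and 130.
def filter_environmental_noise(image, filters):
    """Apply each noise filter in turn; the order may be varied freely."""
    for noise_filter in filters:
        image = noise_filter(image)
    return image

# Hypothetical usage:
# filtered = filter_environmental_noise(
#     destination_image,
#     [precipitation_filter, haze_filter, illumination_filter,
#      temporary_object_filter])
```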
Then, at step 222, the image feature identifier 106 identifies interesting features of the filtered or denoised destination image.
At step 224, the image capturing device 118 captures a current image of a current location, position, pose or person. At step 226, the image provider 112 retrieves the current image of the current location, position, pose or person from the image capturing device 118.
At step 228, the image preprocessor 102 preprocesses or prepares the current image so that it is suitable to be processed in a subsequent step. The image preprocessor 102 may crop, rotate, compress, decompress or adjust the colours of the current image. At step 230, the image noise filter 104 filters or denoises the current image. In other words, the image noise filter 104 removes at least some noise from the current image. The image noise filter 104 may use the trained neural network system 120 to remove the noise from the current image. The image external noise filter 122 of the image noise filter 104 may remove at least some environmental noise from the current image. The image external noise filter 122 may use the trained neural network system 120 to remove the noise from the current image. The trained neural network system 120 may be configured to receive the current image as an input and generate a filtered or denoised current image as an output. The trained neural network system 120 may be configured to remove noise, such as precipitation, haze, illumination or an unwanted object, from the image provided. Hence, if a building in the image is occluded by noise, for instance, rain, smoke, fog, rays of sunlight falling on the lens of the image capturing device or a vehicle that was driving past, the trained neural network system 120 may remove the noise and generate the filtered current image that provides a clear unobstructed view of the building.
At step 232, the process of step 230 starts. At step 234, the image external precipitation noise filter 124 of the image external noise filter 122 removes environmental noise caused by precipitation from the current image. At step 236, the image external haze noise filter 126 of the image external noise filter 122 removes environmental noise caused by haze from the current image. At step 238, the image external illumination noise filter 128 of the image external noise filter 122 removes environmental noise caused by illumination from the current image. At step 240, the image external temporary object noise filter 130 of the image external noise filter 122 removes environmental noise caused by a temporary object from the current image. At step 242, the process of step 230 ends.
At step 244, the image feature identifier 106 identifies interesting features of the filtered or denoised current image. At step 246, the image feature matcher 108 matches the interesting features of the filtered or denoised current image with the interesting features of the filtered or denoised destination image. At step 248, the destination determiner 110 determines whether the mobile robot 190 has arrived at the destination location. The destination determiner 110 determines whether the mobile robot 190 has arrived at the destination location based on the filtered or denoised images. If the destination determiner 110 determines that the mobile robot 190 has not arrived at the destination location, the method 200 returns from step 248 to step 224. If the destination determiner 110 determines that the mobile robot 190 has arrived at the destination location, the method 200 ends at step 250.
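Tying the illustrative sketches above together, the loop of steps 224 to 248 may be expressed as follows; the capture_image callable is a hypothetical stand-in for the image capturing device 118 and is assumed to return a file path to the captured image.

```python
# Illustrative end-to-end sketch of steps 224 to 248 of method 200,
# reusing the hypothetical denoise, match_features and has_arrived
# helpers sketched above. All names are illustrative assumptions.
import cv2
import numpy as np

def run_to_destination(destination_path, generator, capture_image,
                       max_steps: int = 1000) -> bool:
    dest = denoise(destination_path, generator)                # steps 204-208
    dest_gray = cv2.cvtColor(np.array(dest), cv2.COLOR_RGB2GRAY)
    for _ in range(max_steps):
        current = denoise(capture_image(), generator)          # steps 224-230
        cur_gray = cv2.cvtColor(np.array(current), cv2.COLOR_RGB2GRAY)
        kp1, kp2, good = match_features(cur_gray, dest_gray)   # steps 244-246
        if has_arrived(kp1, kp2, good):                        # step 248
            return True                                        # method ends, 250
        # Not yet at the destination: capture the next image (back to 224).
    return False
```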
At least one of the steps of the method 200 may be performed in a location remote from the mobile robot 190, such as in a computer comprised in a network.
Therefore, advantageously, the mobile robot system 100 is able to accurately determine its current location, or whether it has arrived at the destination location, even without access to data obtained during a prior execution of the method 200, such as previously acquired images of its current location or of the destination location, whether that prior execution moved the mobile robot 190 to the same destination location or to another destination location.
Moreover, the mobile robot system 100 is advantageously able to accurately determine its current location, or whether it has arrived at the destination location, without relying entirely on dead reckoning.
Although the invention has been described in considerable detail with reference to certain embodiments or aspects, other embodiments or aspects are possible.
For example, instead of retrieving current images from the image capturing device 118, the mobile robot system 100 may retrieve the current images from another image capturing device comprised in the mobile robot 190.
In addition, the various noise filtering steps 212, 214, 216, 218 of step 208 may be performed in any order or sequence. Similarly, the various noise filtering steps 234, 236, 238, 240 of step 230 may be performed in any order or sequence.
Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
All features disclosed in this specification (including the appended claims, abstract, and accompanying drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

Claims (19)

PATENT CLAIMS

  1. A mobile robot system (100) for moving a mobile robot (190) to a destination location, the mobile robot system (100) comprising: an image provider (112) configured to provide an image; an image noise filter (104) comprising a trained neural network system (120) configured to remove noise from the image; and a destination determiner (110) configured to determine whether the mobile robot (190) has arrived at the destination location based on the filtered image.
  2. The mobile robot system (100) of claim 1, wherein the image noise filter (104) comprises a trained generative adversarial network neural network system (134) configured to remove noise from the image.
  3. The mobile robot system (100) of any one of the preceding claims, wherein the image noise filter (104) comprises an image external noise filter (122) configured to remove environmental noise from the image.
  4. The mobile robot system (100) of any one of the preceding claims, wherein the image noise filter (104) comprises an image external precipitation noise filter (124) configured to remove environmental noise caused by precipitation from the image.
  5. The mobile robot system (100) of any one of the preceding claims, wherein the image noise filter (104) comprises an image external haze noise filter (126) configured to remove environmental noise caused by haze from the image.
  6. The mobile robot system (100) of any one of the preceding claims, wherein the image noise filter (104) comprises an image external illumination noise filter (128) configured to remove environmental noise caused by illumination from the image.
  7. The mobile robot system (100) of any one of the preceding claims, wherein the image noise filter (104) comprises an image temporary object noise filter (130) configured to remove environmental noise caused by a temporary object from the image.
  8. The mobile robot system (100) of any one of the preceding claims, further comprising a network device (114) configured to receive the image from a network.
  9. The mobile robot system (100) of any one of the preceding claims, further comprising an image capturing device (118) configured to capture the image.
  10. The mobile robot system (100) of any one of the preceding claims, further comprising an image feature matcher (108) configured to match interesting features of the filtered image with interesting features of another image.
  11. A mobile robot (190) comprising the mobile robot system (100) of any one of the preceding claims.
  12. A method of moving a mobile robot (190) to a destination location using the mobile robot system (100) of any one of the preceding claims, the method comprising the acts of: providing the image; filtering noise, by the trained neural network, from the image; and determining whether the mobile robot (190) has arrived at the destination based on the filtered image.
  13. A mobile robot (190) comprising a mobile robot system (100) for moving the mobile robot (190) to a destination location, wherein the mobile robot system (100) comprises: an image provider (112) configured to provide an image; an image noise filter (104) comprising a trained neural network system (120) configured to remove noise from the image, wherein the image noise filter (104) comprises: a trained generative adversarial network neural network system (134) configured to remove noise from the image; an image external precipitation noise filter (124) configured to remove environmental noise caused by precipitation from the image; an image external haze noise filter (126) configured to remove environmental noise caused by haze from the image; an image external illumination noise filter (128) configured to remove environmental noise caused by illumination from the image; and an image temporary object noise filter (130) configured to remove environmental noise caused by a temporary object from the image; a destination determiner (110) configured to determine whether the mobile robot (190) has arrived at the destination location based on the filtered image; a network device (114) configured to receive the image from a network; an image capturing device (118) configured to capture the image; and an image feature matcher (108) configured to match interesting features of the filtered image with interesting features of another image.
  14. A method of moving a mobile robot (190) to a destination location, the method comprising the acts of: providing an image; filtering noise, by a trained neural network, from the image; and determining whether the mobile robot (190) has arrived at the destination based on the filtered image.
  15. A non-transitory computer-readable medium with instructions stored thereon, that when executed, perform a method of moving a mobile robot (190) to a destination location comprising the acts of: providing an image; filtering noise, by a trained neural network, from the image; and determining whether the mobile robot (190) has arrived at the destination based on the filtered image.
  16. A mobile robot system (100) for moving a mobile robot (190) to a destination location, the mobile robot system (100) comprising: an image provider (112) configured to provide a destination image of the destination location; an image noise filter (104) configured to remove noise from the destination image of the destination location; and a destination determiner (110) configured to determine whether the mobile robot (190) has arrived at the destination location based on the filtered destination image.
  17. A mobile robot (190) comprising a mobile robot system (100) for moving the mobile robot (190) to a destination location, wherein the mobile robot system (100) comprises: an image provider (112) configured to provide a destination image of the destination location; an image noise filter (104) configured to remove noise from the destination image of the destination location; wherein the image noise filter (104) comprises: a trained generative adversarial network neural network system (134) configured to remove noise from the destination image of the destination location; an image external precipitation noise filter (124) configured to remove environmental noise caused by precipitation from the destination image of the destination location; an image external haze noise filter (126) configured to remove environmental noise caused by haze from the destination image of the destination location; an image external illumination noise filter (128) configured to remove environmental noise caused by illumination from the destination image of the destination location; and an image temporary object noise filter (130) configured to remove environmental noise caused by a temporary object from the destination image of the destination location; a destination determiner (110) configured to determine whether the mobile robot (190) has arrived at the destination location based on the filtered destination image; a network device (114) configured to receive the destination image from a network; and an image feature matcher (108) configured to match the interesting features of the filtered destination image with interesting features of another image.
  18. A method of moving a mobile robot (190) to a destination location, the method comprising the acts of: providing a destination image of the destination location; filtering noise from the destination image; and determining whether the mobile robot (190) has arrived at the destination based on the filtered destination image.
  19. A non-transitory computer-readable medium with instructions stored thereon, that when executed, perform a method of moving a mobile robot (190) to a destination location comprising the acts of: providing a destination image of the destination location; filtering noise from the destination image; and determining whether the mobile robot (190) has arrived at the destination based on the filtered destination image.
GB2010472.5A 2020-07-08 2020-07-08 Mobile robot system and method for moving a mobile robot to a destination location Withdrawn GB2596834A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2010472.5A GB2596834A (en) 2020-07-08 2020-07-08 Mobile robot system and method for moving a mobile robot to a destination location

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2010472.5A GB2596834A (en) 2020-07-08 2020-07-08 Mobile robot system and method for moving a mobile robot to a destination location

Publications (2)

Publication Number Publication Date
GB202010472D0 GB202010472D0 (en) 2020-08-19
GB2596834A true GB2596834A (en) 2022-01-12

Family

ID=72050527

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2010472.5A Withdrawn GB2596834A (en) 2020-07-08 2020-07-08 Mobile robot system and method for moving a mobile robot to a destination location

Country Status (1)

Country Link
GB (1) GB2596834A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200074234A1 (en) * 2018-09-05 2020-03-05 Vanderbilt University Noise-robust neural networks and methods thereof
WO2020058334A1 (en) * 2018-09-21 2020-03-26 Starship Technologies Oü Method and system for modifying image data captured by mobile robots
US20200118249A1 (en) * 2018-10-10 2020-04-16 Samsung Electronics Co., Ltd. Device configured to perform neural network operation and method of operating same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Everett M et al, "Planning beyond the sensing horizon using a learned context", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China: 2019. *

Also Published As

Publication number Publication date
GB202010472D0 (en) 2020-08-19

Similar Documents

Publication Publication Date Title
Yang et al. Visual perception enabled industry intelligence: state of the art, challenges and prospects
CN107239728B (en) Unmanned aerial vehicle interaction device and method based on deep learning attitude estimation
CN110148196B (en) Image processing method and device and related equipment
CN104050679B (en) Illegal parking automatic evidence obtaining method
EP3499414B1 (en) Lightweight 3d vision camera with intelligent segmentation engine for machine vision and auto identification
Milford et al. Single camera vision-only SLAM on a suburban road network
US11669972B2 (en) Geometry-aware instance segmentation in stereo image capture processes
CN110569754A (en) Image target detection method, device, storage medium and equipment
Stone et al. Skyline-based localisation for aggressively manoeuvring robots using UV sensors and spherical harmonics
CN113160062A (en) Infrared image target detection method, device, equipment and storage medium
CN111401215A (en) Method and system for detecting multi-class targets
CN111831010A (en) Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN113515536A (en) Map updating method, device, equipment, server and storage medium
Yan et al. Human-object interaction recognition using multitask neural network
CN113378756B (en) Three-dimensional human body semantic segmentation method, terminal device and storage medium
Arthi et al. Object detection of autonomous vehicles under adverse weather conditions
GB2596834A (en) Mobile robot system and method for moving a mobile robot to a destination location
CN112655021A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116503567A (en) Intelligent modeling management system based on AI big data
Nie et al. Monocular vision based perception system for nighttime driving
Chen et al. Image detector based automatic 3D data labeling and training for vehicle detection on point cloud
CN111890358B (en) Binocular obstacle avoidance method and device, storage medium and electronic device
CN111160278A (en) Face texture structure data acquisition method based on single image sensor
Zhao et al. An RGBD data based vehicle detection algorithm for vehicle following systems
Chen et al. A real time vision-based smoking detection framework on edge

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)