WO2019114219A1 - Mobile robot, control method and control system therefor - Google Patents
Mobile robot, control method and control system therefor
- Publication number
- WO2019114219A1 (PCT/CN2018/090655)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- mobile robot
- flexible
- control
- behavior
- Prior art date
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
Definitions
- the present application relates to the field of mobile robot technology, and in particular, to a mobile robot, a control method and a control system for the mobile robot.
- a mobile robot is a machine that performs specific jobs automatically. It can accept human commands, run pre-programmed routines, or act according to principles based on artificial-intelligence techniques. These mobile robots can be used indoors or outdoors, in industrial, commercial, or home applications. They can replace security guards, welcoming or ordering staff, or people cleaning the ground, and can also be used for family companionship and auxiliary office work.
- the obstacle detection techniques commonly used in existing mobile robots mainly include:
- Collision detection: a mechanical baffle connected to an electronic switch is installed at the front of the bottom of the mobile robot. When an obstacle is encountered, the electronic switch changes from the off state to the on state, thereby detecting an obstacle in front. Detection of this type requires an actual collision, so the user experience is poor. Moreover, a flexible winding is easily pushed along by the mobile robot, and its impact force is too small to trigger the electronic switch, resulting in missed detections.
- Infrared ranging detection: one or more infrared ranging sensors are installed on the mobile robot, and when a detected distance is less than a set threshold, an obstacle is judged to be in front. Infrared detection is greatly affected by ambient light, and it also suffers from large blind zones at close range. Obstacles made of glass, light-absorbing, or all-black materials are easily missed, and detection consistency is poor. Because of the housing structure, the infrared sensor cannot be mounted too low, so some low obstacles, such as flexible windings that sit below the infrared beam, are missed.
- Ultrasonic ranging detection: one or more ultrasonic ranging sensors are installed on the mobile robot, and when a detected distance is less than a set threshold, an obstacle is judged to be ahead.
- ultrasonic waves are easily affected by factors such as ambient temperature, material of the reflector, and multipath propagation of sound waves.
- likewise, the ultrasonic sensor cannot be mounted too low, so some low obstacles, such as flexible windings that sit below the ultrasonic beam, are missed.
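Both ranging approaches above reduce to the same threshold test, which can be sketched as follows (the 0.05 m default is an illustrative value, not taken from the application):

```python
def obstacle_ahead(distance_m, threshold_m=0.05):
    """Threshold rule shared by the IR and ultrasonic detectors: an
    obstacle is judged to be in front when the measured distance is
    below a set threshold (0.05 m is an illustrative value)."""
    return distance_m < threshold_m
```

A flat flexible winding lying below the sensor's beam never yields a short reading, which is exactly the missed-detection case described above.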
- the purpose of the present application is to disclose a mobile robot and a control method and control system thereof for improving the detection accuracy of a flexible wrap by a mobile robot.
- a control method of a mobile robot, comprising the steps of: controlling, in an operation mode of the mobile robot, the imaging device to capture an image including the ground; recognizing at least one image including the ground captured by the imaging device; and controlling the behavior of the mobile robot when a flexible winding is recognized in the image.
- a mobile robot, comprising: a storage device storing a simultaneous localization and mapping application and a behavior control application; an imaging device for acquiring images of the operating environment in the working mode of the mobile robot; a processing device, connected to the storage device and the imaging device, configured to control the imaging device to capture an image including the ground in the operation mode of the mobile robot and, when a flexible winding is recognized in the image, to invoke the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot; and a mobile system, coupled to the processing device, for driving the mobile robot to move based on control commands issued by the processing device.
- the present application discloses, in a third aspect, a control system for a mobile robot configured with an imaging device, the control system comprising: a storage device storing a simultaneous localization and mapping application and a behavior control application; and a processing device, connected to the storage device and the imaging device, configured to control the imaging device to capture an image including the ground in the operation mode of the mobile robot and, when a flexible winding is recognized in the image, to invoke the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot.
- a cleaning robot, comprising: an imaging device; a control system as described above; a mobile system, coupled to the control system, for driving the robot to move based on control commands issued by the control system; and a cleaning system, coupled to the control system, for performing cleaning operations on the ground while the mobile robot moves.
- in a fifth aspect, the present application discloses a computer-readable storage medium storing at least one program that, when executed by a processor, implements the steps of the control method of a mobile robot described above.
- the mobile robot and the control method and control system thereof disclosed in the present application control an imaging device to capture images including the ground, recognize at least one of the captured images, and control the behavior of the mobile robot when a flexible winding is recognized in an image. With the present application, flexible windings can be effectively detected, and the behavior of the mobile robot can be controlled accordingly based on the detection result.
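The capture-recognize-control loop summarized above can be sketched as follows; `capture_image`, `classifier`, `on_winding`, and `on_clear` are hypothetical placeholders, not interfaces defined in the application:

```python
def control_step(capture_image, classifier, on_winding, on_clear):
    """One iteration of the control method: capture a ground image,
    run the flexible-winding classifier on it, and dispatch the
    corresponding behavior-control action."""
    image = capture_image()
    if classifier(image):          # flexible winding recognized
        return on_winding(image)   # e.g. SLAM-informed avoidance
    return on_clear(image)         # e.g. continue along the path
```

In the application itself, the winding branch invokes the simultaneous localization and mapping application and the behavior control application; the stubs here only mark where those calls would sit.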
- FIG. 1 is a schematic structural view of an embodiment of a mobile robot of the present application.
- Fig. 2 is a diagram showing the relationship between the positional changes of matching features in two images acquired at successive times.
- FIG. 3 is a schematic view showing the structure of the cleaning robot of the present application in an embodiment.
- FIG. 4 is a schematic flow chart showing a control method of the mobile robot of the present application in an embodiment.
- FIG. 5 is a schematic diagram showing the refinement flow of FIG. 4.
- although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
- the first predetermined threshold may be referred to as a second predetermined threshold, and similarly, the second predetermined threshold may be referred to as a first predetermined threshold without departing from the scope of the various described embodiments.
- both the first preset threshold and the second preset threshold describe a threshold, but unless the context explicitly indicates otherwise, they are not the same preset threshold.
- a similar situation also includes a first volume and a second volume.
- the present application relates to the field of mobile robots, which are machines that perform specific tasks automatically. They can accept human commands, run pre-programmed routines, or act according to principles based on artificial-intelligence techniques. These mobile robots can be used indoors or outdoors, in industrial, commercial, or home applications. They can replace security guards, welcoming or ordering staff, or people cleaning the ground, and can also be used for family companionship and auxiliary office work. Taking the most common cleaning robot as an example: the cleaning robot, also known as an automatic sweeper or smart vacuum cleaner, is a kind of smart household appliance that can complete cleaning, vacuuming, floor wiping, and the like. Specifically, the cleaning robot can be controlled by a person (an operator holding a remote controller) or can perform floor-cleaning work in a room according to certain set rules.
- these flexible windings may not only entangle the wheels of the cleaning robot, so that the robot cannot move and, in serious cases, may even topple over and cause a safety accident; they may also entangle the cleaning system of the robot, for example winding around the cleaning brush or winding into and clogging the vacuum parts, so that the cleaning robot cannot perform its cleaning work.
- the same applies, by analogy, to mobile robots used in other application scenarios.
- the present application provides a control system for the mobile robot.
- referring to FIG. 1, a schematic structural view of an embodiment of the mobile robot of the present application is shown.
- the mobile robot includes a storage device 11, an imaging device 13, a processing device 15, and a mobile system 17.
- the storage device 11 stores a simultaneous positioning and map construction application and a behavior control application.
- the SLAM (Simultaneous Localization And Mapping) application is a basic application in the field of intelligent robots.
- the positioning technique of the mobile robot may include a process that allows the mobile robot to determine its position and orientation (or "attitude") relative to its surroundings; a mobile robot that can build a map of its surroundings can locate itself within that map and thereby exhibit autonomy.
- the problem can be described as follows: when a mobile robot is in an unknown environment, can it gradually build a complete map of the environment while simultaneously determining the direction in which it should travel? That is, achieving intelligence requires completing three tasks: first Localization, second Mapping, and third the subsequent Navigation.
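As a toy illustration of the localization task (not part of the application; the visual SLAM it describes is far more involved), a dead-reckoning pose update from odometry might look like:

```python
import math

def advance(pose, distance, turn_rad):
    """Localization toy: update a pose (x, y, heading) from one
    odometry step. Mapping would record landmarks relative to the
    resulting pose; navigation would choose the next command."""
    x, y, heading = pose
    heading += turn_rad
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)
```

Accumulated error in such updates is exactly what the vision-based compensation discussed below is meant to correct.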
- the behavior control application in the present application refers to controlling the mobile robot's navigation, pose (or "attitude") adjustment, and the like according to set information or instructions.
- “Attitude” herein includes the position of the mobile robot within the movement space (e.g., its x, y coordinates) and the orientation of the mobile robot relative to, for example, a reference such as a wall or a cardinal direction within that space.
- a technique based on visual Simultaneous Localization and Mapping can use image data from an image sensor to compensate for errors in the movement information provided by motion sensors, thereby providing more accurate navigation for the cleaning robot.
- the behavior control application is a basic application in the field of intelligent robots; it is associated with the processing device 15 and the mobile system 17. Through the behavior control application, the processing device 15 can control the mobile system 17. In practical applications, the behavior control application can be combined with the aforementioned SLAM application: the processing device 15 issues control instructions to the mobile system 17 according to the positioning information and map information obtained by the SLAM application, so that the mobile system performs the corresponding behavior. “Behavior” herein includes the movement and posture of the mobile robot.
- the storage device 11 also pre-stores standard physical features of at least one standard component.
- the standard component may include a standard component designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard.
- industry standards such as mechanical industry standard JB, building materials industry standard JC, etc.; national standards such as Chinese GB standard, German DIN standard, British BS standard, etc.; international standards such as international ISO standards; custom standards will be detailed later.
- the standard physical features may include overall dimensions, standard structural relationships, etc.; for example, the standard physical features of a standard component include its actual physical length, width, and height, and other actual physical dimension data specified by the corresponding standard.
- examples include the spacing between the two holes of a power outlet, the length and width of a power outlet, the length and width of a floorboard or floor tile, and the length, width, and thickness of a carpet.
- the storage device 11 includes, but is not limited to, read-only memory (ROM), random access memory (RAM), and non-volatile RAM (NVRAM), for example one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- storage device 11 may also include memory remote from the one or more processors, such as network-attached storage accessed via RF circuitry or an external port and a communication network (not shown), where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wireless local area network (WLAN), a storage area network (SAN), etc., or a suitable combination thereof.
- the memory controller can control access to the storage device by other components of the mobile robot, such as a central processing unit (CPU) and a peripheral interface.
- the camera device 13 is used to acquire an image of the operating environment in the operating mode of the mobile robot.
- the image pickup device 13 includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
- the power supply of the imaging device 13 can be controlled by the power supply system of the mobile robot; once the robot is powered on, the imaging device 13 starts capturing images and supplies them to the processing device 15.
- the imaging device in the cleaning robot caches the captured indoor images in the storage device 11 in a preset video format, to be acquired by the processing device 15.
- the imaging device 13 is for capturing an image during movement of the mobile robot.
- the camera device 13 can be disposed on the top surface of the mobile robot.
- the camera device in the cleaning robot is disposed in the middle or at the edge of the top surface of its housing.
- the field of view optical axis of the imaging device is ⁇ 30° with respect to the vertical.
- the angle of the optical axis of the camera of the cleaning robot with respect to the perpendicular is -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°.
- the camera device 13 can be disposed at the intersection of the top surface and the side surface of the mobile robot.
- at least one recessed structure is provided at the intersection of the top surface and the side surface of the cleaning robot housing (the recessed structure may be disposed at the front end, rear end, or side of the housing), and the imaging device is disposed in the recessed structure.
- the angle ⁇ defined by the plane defined by the top surface of the housing is parallel to the horizontal plane, which is 61° to 85°, that is, the plane defined by the lens optical axis in the camera and the top surface of the housing
- the angle ⁇ is 61°, 62°, 63°, 64°, 65°, 66°, 67°, 68°, 69°, 70°, 71°, 72°, 73°, 74°, 75°, 76°, 77°, 78°, 79°, 80°, 81°, 82°, 83°, 84°, 85°.
- the lens in the camera is designed to be tilted forward to capture more environmental information.
- a camera with a forward-tilted lens can capture more of the environment in front of the cleaning robot than a camera with its lens facing vertically upward, for example a portion of the ground area in front of the robot.
- it should be noted that the angles between the optical axis and the vertical line or the top surface of the housing are given above in integer steps of 1° by way of example, but are not limited thereto; depending on the design requirements of the actual mobile robot, the angular precision may be higher, for example 0.1°, 0.01°, or finer, and exhaustive examples are not given here.
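As an illustrative aside (the geometry below is an assumption for the sketch, not taken from the application), the angle α to the top surface fixes the forward tilt from the vertical, and together with the lens's vertical field of view determines whether the ground ahead can enter the image:

```python
def forward_tilt_deg(alpha_deg):
    """Tilt of the optical axis from the vertical, given the angle
    alpha between the optical axis and the horizontal top surface."""
    return 90.0 - alpha_deg

def ground_in_view(alpha_deg, half_fov_deg):
    """True if the lowest ray of the field of view reaches or dips
    below the horizontal, so the ground ahead can enter the image.
    Assumes the lowest ray lies half_fov below the optical axis."""
    return alpha_deg - half_fov_deg <= 0.0
```

With α = 61° the axis is tilted 29° forward of the vertical; a wide vertical field of view is then needed for the ground ahead to appear in frame.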
- processing device 15 includes one or more processors. Processing device 15 is operatively coupled to the read-only memory, random access memory, and/or non-volatile memory in storage device 11. Processing device 15 may execute instructions stored in read-only memory, random access memory, and/or non-volatile memory to perform operations in the robot, such as extracting features from an image and locating within a map based on those features, or acquiring an image and performing recognition on it.
- the processor may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof.
- the processing device 15 is also operatively coupled to an I/O port and an input structure that enables the mobile robot to interact with various other electronic devices that enable the user to interact with the computing device.
- the input structure can include buttons, keyboards, mice, trackpads, and the like.
- the other electronic device may be a drive motor in the mobile system of the mobile robot, or a slave processor in the mobile robot dedicated to controlling the mobile system and the cleaning system, such as a microcontroller unit (MCU).
- the processing device 15 is coupled to the storage device 11 and the camera device 13 via data lines, respectively.
- the processing device 15 interacts with the storage device 11 via data read/write technology, and interacts with the imaging device 13 via an interface protocol.
- the data reading and writing technology includes but is not limited to: a high speed/low speed data interface protocol, a database read and write operation, and the like.
- the interface protocols include, but are not limited to, an HDMI interface protocol, a serial interface protocol, and the like.
- the processing device 15 is configured to control the imaging device to capture an image including the ground in the operation mode of the mobile robot and, when a flexible winding is recognized in the image, to invoke the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot.
- the processing device 15 is configured to acquire at least one image from the image captured by the camera device 13 and identify the at least one image to detect whether a flexible winding exists in the at least one image.
- the imaging device 13 can be controlled to capture an image including the ground.
- the "ground” can be specifically moved by the mobile robot in accordance with the walking path. ground.
- the imaging device 13 may be used to take an image to acquire an image of the ground located in front of the moving direction of the cleaning robot.
- the identification of flexible windings in the at least one image is performed using a flexible-winding image classifier; that is, at recognition time, the image to be recognized is fed as input to the flexible-winding image classifier, and the recognition result is output.
- the flexible wrap image classifier is trained by a convolutional neural network.
- the Convolutional Neural Network (CNN) is an architecture of deep neural networks, which is closely related to image processing.
- the weight-sharing network structure of the convolutional neural network makes it more similar to a biological neural network. Such a structure not only reduces the complexity of the network model but also reduces the number of weights, and it is highly invariant to translation, scaling, tilting, and other forms of deformation.
- the convolutional neural network can take an image directly as network input, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. These properties give it unique advantages in image recognition.
- the flexible wrap image classifier is trained by a convolutional neural network.
- the training can include: first, creating a training sample set by acquiring images containing flexible windings that conform to preset rules as training samples; and then training on the created sample set to obtain the flexible-winding image classifier.
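The create-samples-then-train procedure can be illustrated with a deliberately tiny stand-in for the convolutional neural network: a linear perceptron trained on hand-made feature vectors. This is purely a sketch of the train-then-classify pipeline, not the classifier the application describes:

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Train a linear classifier: `samples` are flattened feature
    vectors, `labels` are 1 (flexible winding present) or 0 (absent).
    Returns the model as (weights, bias)."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(model, x):
    """Apply the trained model to one feature vector."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy, linearly separable "feature vectors" standing in for images:
samples = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = [1, 1, 0, 0]   # 1 = winding present, 0 = absent
model = train_perceptron(samples, labels)
```

A real implementation would replace the perceptron with a CNN operating on whole ground images, but the train-then-classify flow is the same.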
- images of flexible windings conforming to the preset rules may be collected directly, for example by searching the network for images of the relevant flexible windings or by photographing the relevant flexible windings oneself, and selecting from these the images of typical flexible windings that conform to the preset rules for use as training samples.
- alternatively, images of some or all flexible windings may be selected as training samples from existing standard libraries of various types of flexible windings: for example, images of some or all windings may be selected from several standard libraries of different flexible windings and combined to form the training sample set, or at least one standard library may be selected from the standard libraries of different flexible windings and some or all of its images designated as the training sample set.
- the image containing the flexible winding used as a training sample may be a simple image with a single background (for example, a single solid color) or an image with a real-world background.
- the image as the training sample may be a ground image including the flexible winding.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- depending on the actual application environment, the ground includes, but is not limited to, the following categories: cement floors, painted floors, floors laid with composite flooring, floors laid with solid-wood flooring, carpeted floors, and the like.
- for a specific flexible winding, a corresponding training sample set can be made: a cable training sample set for cables (e.g., images of various types of cables presented in different patterns on different grounds), a rope training sample set for ropes (e.g., images of various types of ropes presented in different patterns on different grounds), a ribbon training sample set for ribbons (e.g., images of various types of ribbons presented in different patterns on different grounds), a cloth-head training sample set for cloth heads (e.g., images of various types of cloth heads presented in different patterns on different grounds), a cotton-wool training sample set for cotton wool (e.g., images of various types of cotton wool presented in different patterns on different grounds), a plant-vine training sample set for plant vines (e.g., images of various types of plant vines presented in different patterns on different grounds), and the like.
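Organizing per-category sample sets as described above might look like the following sketch; the file names and ground types are hypothetical:

```python
# Hypothetical per-category registry: category -> (image file, ground type)
training_sets = {
    "cable": [("cable_on_carpet_01.png", "carpet"),
              ("cable_on_wood_02.png", "solid-wood floor")],
    "rope": [("rope_on_cement_01.png", "cement floor")],
    "cotton_wool": [("cotton_on_composite_01.png", "composite floor")],
}

def samples_for(category):
    """Return the training sample list for one winding category
    (empty if no set has been made for that category)."""
    return training_sets.get(category, [])
```

Keeping one set per winding category, each spanning several ground types, mirrors the enumeration in the paragraph above.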
- image preprocessing may be performed on the images in the training sample set before the sample set is used for training.
- the image preprocessing includes, but is not limited to, cropping, compression, grayscale conversion, image filtering, and/or noise filtering of the images in the training sample set.
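Two of the listed preprocessing steps, grayscale conversion and noise filtering, can be sketched on a nested-list image (illustrative only; a real pipeline would use an image library):

```python
def to_grayscale(rgb_image):
    """Grayscale conversion using the common luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def mean_filter3(gray):
    """3x3 mean filter for noise reduction; border pixels keep
    their original values."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(gray[i + di][j + dj]
                            for di in (-1, 0, 1)
                            for dj in (-1, 0, 1)) / 9.0
    return out
```

Cropping and compression would be further list-slicing and resampling steps in the same spirit.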
- the training can include: first, creating a training sample set by collecting images containing flexible windings that conform to the preset rules as positive samples, and images that contain no flexible winding, or contain only flexible windings that do not conform to the preset rules, as negative samples; then, training on the sample set to obtain the flexible-winding image classifier.
- images of flexible windings conforming to the preset rules may be collected directly, for example by searching the network for images of the relevant flexible windings or by photographing them oneself, and selecting from these the images of typical flexible windings that conform to the preset rules for use as positive samples.
- alternatively, images of some or all flexible windings may be selected from existing standard libraries of various types of flexible windings as positive samples, for example from several standard libraries of different flexible windings.
- the image containing the flexible winding used as a positive sample may be a simple image with a single background (for example, a single solid color) or an image with a real-world background. Since in the present application the mobile robot controls the imaging device 13 to capture images including the ground, the positive-sample images may be ground images that include a flexible winding.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- the ground includes, but is not limited to, the following categories: cement floors, painted floors, floors laid with composite flooring, floors laid with solid-wood flooring, carpeted floors, and the like. Therefore, for a specific flexible winding, a corresponding positive sample set can be made: a cable positive sample set for cables (e.g., images of various types of cables presented in different patterns on different grounds), a rope positive sample set for ropes (e.g., images of various types of ropes presented in different patterns on different grounds), a ribbon positive sample set for ribbons (e.g., images of various types of ribbons presented in different patterns on different grounds), a cloth-head positive sample set for cloth heads (e.g., images of various types of cloth heads presented in different patterns on different grounds), a cotton-wool positive sample set for cotton wool (e.g., images of various types of cotton wool presented in different patterns on different grounds), a plant-vine positive sample set for plant vines (e.g., images of various types of plant vines presented in different patterns on different grounds), and the like.
- images that do not contain a flexible winding, or that contain a flexible winding not conforming to the preset rules, may also be collected by the user as negative samples: for example, by searching the network for, or photographing, related images that do not contain a flexible winding or that contain a flexible winding not conforming to the preset rules, and selecting from them the images to be used as negative samples.
- some or all images may also be selected from existing standard libraries that do not contain flexible windings and used as negative samples, for example, images drawn from several different standard libraries that do not contain flexible windings and combined to form a negative sample set; alternatively, at least one standard library is selected from the different standard libraries that do not contain flexible windings, and some or all of the images in the selected library or libraries are determined as negative samples.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- the ground includes, but is not limited to, the following categories: cement floor, painted floor, floor laid with composite flooring, floor laid with solid wood flooring, floor laid with carpet, and the like.
- therefore, for each specific type of flexible winding, a corresponding negative sample set can be made: a cable negative sample set corresponding to cables (for example, images on different grounds containing no cable, or containing a cable that does not conform to the preset rules), a rope negative sample set corresponding to ropes (for example, images on different grounds containing no rope, or containing a rope that does not conform to the preset rules), a ribbon negative sample set corresponding to ribbons (for example, images on different grounds containing no ribbon, or containing a ribbon that does not conform to the preset rules), a cloth-head negative sample set corresponding to cloth heads (for example, images on different grounds containing no cloth head, or containing a cloth head that does not conform to the preset rules), and so on.
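The pairing of winding categories with ground categories described above can be sketched as a small indexing helper. This is a minimal illustration in Python; the category names and the `(label, tag)` representation are assumptions for the sketch, not part of the original text:

```python
# Illustrative category lists matching the winding and ground types named above.
WINDING_CATEGORIES = ["cable", "rope", "ribbon", "lace", "towel",
                      "cloth_head", "cotton_wool", "plant_vine"]
GROUND_CATEGORIES = ["cement", "painted", "composite", "solid_wood", "carpet"]

def build_sample_index(category, grounds, positive=True):
    """Return (label, tag) pairs pairing one winding category with each ground type."""
    label = 1 if positive else 0
    return [(label, f"{category}_on_{g}") for g in grounds]

# Assemble the full positive sample index: every winding on every ground.
positives = []
for c in WINDING_CATEGORIES:
    positives += build_sample_index(c, GROUND_CATEGORIES, positive=True)
```

Each `(label, tag)` pair would then be associated with the actual images collected for that combination; calling `build_sample_index` with `positive=False` yields the index for the corresponding negative sample set.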
- image pre-processing may be performed on the images in the training sample set before the training sample set is used for training.
- the image pre-processing includes, but is not limited to, cropping, compression, grayscale conversion, image filtering, and/or noise filtering of the images in the training sample set.
- the image can be identified using the trained flexible wrap image classifier.
- the image to be recognized is input as an input to the flexible wrap image classifier, and the corresponding recognition result is output by the flexible wrap image classifier.
- identifying the image by using the flexible winding image classifier may include at least the following steps: performing image pre-processing on the image to be recognized; performing feature extraction on the pre-processed image; and inputting the extracted features of the image into the flexible winding image classifier to obtain a recognition result.
- the image pre-processing of the image to be recognized includes, but is not limited to, cropping, compression, grayscale processing, thresholding processing, and the like of the image to be recognized.
- the pre-processing may further include image filtering, noise filtering processing, and the like.
- for example, grayscale processing is performed on the image to be recognized to obtain a grayscale image, and the grayscale image is then thresholded (for example, after binarization the grayscale image becomes a binarized image, that is, a black-and-white image, which reflects the overall and local features of the image).
- Feature extraction of the image after image preprocessing includes, but is not limited to, extracting contour features, texture features, and the like of the image to be recognized.
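The pre-processing steps above (grayscale conversion, then binarization) and a toy texture feature can be sketched in plain Python as follows. This is a minimal illustration, not the classifier actually used; the luminance weights and the threshold of 128 are conventional assumed values:

```python
def to_grayscale(image_rgb):
    """Grayscale an RGB image given as nested lists of (r, g, b) tuples."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in image_rgb]

def binarize(gray, threshold=128):
    """Threshold the grayscale image into a black-and-white (0/1) image."""
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def edge_density(binary):
    """A toy texture feature: fraction of horizontally adjacent pixel pairs that differ."""
    pairs = sum(len(row) - 1 for row in binary)
    transitions = sum(abs(row[i] - row[i + 1])
                      for row in binary for i in range(len(row) - 1))
    return transitions / pairs
```

The resulting black-and-white image and features such as `edge_density` stand in for the contour and texture features that would be fed to the trained classifier.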
- the aforementioned flexible wrap image classifier for performing flexible wrap identification may be pre-stored in the storage device 11.
- the flexible wrap image classifier is written into the storage device 11.
- the flexible wrap image classifier can be set with permissions to prohibit the end user from modifying it.
- the flexible wrap image classifier may also open some or all of the permissions, allowing the end user to modify it (for example, modification, addition, or deletion operations, etc.).
- the flexible wrap image classifier may also perform an update operation after the mobile robot is networked and establishes a communication connection with a corresponding vendor server or application server.
- the flexible wrap image classifier may also be stored in a cloud system in remote communication with the mobile robot, such that when performing image recognition, the processing device 15 may obtain at least one image from the images captured by the camera 13 and transmit the at least one image to the cloud system; the flexible wrap image classifier in the cloud system then identifies the at least one image and remotely transmits the recognition result to the mobile robot.
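The choice between on-device and cloud-side recognition described above can be sketched as a small dispatch helper; the function names and the fallback policy are assumptions for illustration, not from the original:

```python
def recognize(image, local_classifier=None, cloud_send=None):
    """Prefer the on-device classifier; fall back to the cloud system when absent."""
    if local_classifier is not None:
        return local_classifier(image)       # classifier stored in storage device 11
    if cloud_send is not None:
        return cloud_send(image)             # cloud system returns the result remotely
    raise RuntimeError("no classifier available")
```

In practice `cloud_send` would wrap the network transfer of the image and the remote reply; here both paths are stand-in callables.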
- at least one image can be acquired by the processing device 15 from the images captured by the imaging device 13 and identified by the flexible wrap image classifier, whereby it can be detected whether a flexible winding is present in the at least one image, and the specific category of any flexible winding present can be obtained.
- the processing device 15 can also be used to invoke the simultaneous positioning and map construction application and the behavior control application from the storage device 11 to control the behavior of the mobile robot when it is recognized that a flexible winding is present in the image.
- the processing device 15 is configured to invoke the simultaneous positioning and map construction application to perform: acquiring the positions of matching features in at least two images captured at preceding and succeeding times, and determining the position and attitude of the mobile robot according to the correspondence between the image coordinate system and the physical space coordinate system and the positions of the matching features.
- the storage device 11 also stores a correspondence relationship between the image coordinate system and the physical space coordinate system.
- the image coordinate system is an image coordinate system constructed based on image pixel points, and the two-dimensional coordinate parameter of each image pixel point in the image captured by the imaging device 13 can be described by the image coordinate system.
- the image coordinate system may be a Cartesian coordinate system or a polar coordinate system or the like.
- the physical space coordinate system is a coordinate system constructed based on positions in the actual two-dimensional or three-dimensional physical space; a physical space position may be described in the physical space coordinate system according to a preset correspondence between an image pixel unit and a unit length (or unit angle).
- the physical space coordinate system may be a two-dimensional Cartesian coordinate system, a polar coordinate system, a spherical coordinate system, a three-dimensional rectangular coordinate system, or the like.
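A minimal sketch of such a correspondence between the image coordinate system and a two-dimensional physical space coordinate system, assuming a single calibrated scale factor (the numeric values below are illustrative assumptions):

```python
def make_correspondence(mm_per_pixel, origin_px=(0.0, 0.0)):
    """Return a mapping from an image pixel coordinate to a 2-D physical
    coordinate in millimetres, per the preset pixel-unit / unit-length rule."""
    ox, oy = origin_px
    def to_physical(px, py):
        return ((px - ox) * mm_per_pixel, (py - oy) * mm_per_pixel)
    return to_physical

# Example calibration: image origin at pixel (320, 240), 2 mm per pixel.
to_physical = make_correspondence(mm_per_pixel=2.0, origin_px=(320, 240))
```

A real correspondence would also account for camera tilt and lens distortion; this sketch keeps only the pixel-unit-to-unit-length relationship that the text describes.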
- the mobile robot further includes a motion sensing device (not shown) for acquiring movement information of the robot.
- the motion sensing device includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, and the like. During the movement of the robot, the motion sensing device continuously detects the movement information and provides it to the processing device.
- the displacement sensor, gyroscope, speed sensor, etc. can be integrated in one or more chips.
- the ranging sensor and the cliff sensor may be disposed on a body side of the robot.
- a ranging sensor in the cleaning robot is disposed at the edge of the housing;
- a cliff sensor in the cleaning robot is disposed at the bottom of the robot.
- the movement information that the processing device can acquire includes, but is not limited to, displacement information, angle information, distance information with obstacles, speed information, direction of travel information, and the like.
- the mobile robot further includes an initialization device (not shown), and the initialization device may construct the correspondence based on the positions of matching features in at least two images captured at preceding and succeeding times and on the movement information acquired from the preceding time to the current time.
- the initialization device may be a program module whose program portion is stored in the storage device and executed via a call of the processing device.
- the processing device invokes an initialization device to construct the correspondence.
- the initialization device acquires the movement information provided by the movement sensing device during the movement of the robot and acquires the respective images captured by the imaging device 13.
- the initialization device may acquire the movement information and at least two images for a short period of time during which the robot moves. For example, the initialization device acquires the movement information and at least two images when it is detected that the robot is moving in a straight line. For another example, the initialization device acquires the movement information and at least two images when it is detected that the robot is in a turning movement. Wherein, the interval time for acquiring at least two images during the turning movement may be shorter than the interval for acquiring at least two images when moving in a straight line.
- the initialization device identifies and matches features in the respective images and obtains image locations of the matching features in the respective images.
- Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
- the initialization device can acquire the image positions of matching features by means of a tracking device (not shown).
- the tracking device is configured to track positions of at least two images containing the same feature in the before and after time.
- the initialization device then constructs the correspondence according to the image location and the physical spatial location provided by the movement information.
- the initialization device may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
- for example, the initialization device may take the physical space position of the robot at the time of the earlier image as the coordinate origin of the physical space coordinate system, associate that origin with the position of the matching feature in the image coordinate system, and thereby construct the correspondence between the two coordinate systems.
- the working process of the initialization device may be performed based on a user's instruction or transparent to the user.
- the execution of the initialization device may be initiated when the correspondence is not yet stored in the storage device 11, or when the correspondence needs to be updated; this is not limited here.
- the correspondence may be saved in the storage device by a program, a database, or the like of the corresponding algorithm.
- software components stored in memory include an operating system, a communication module (or set of instructions), a contact/motion module (or set of instructions), a graphics module (or set of instructions), and an application (or set of instructions).
- the storage device also stores temporary data or persistent data including an image captured by the imaging device and a position and posture obtained by the processing device when performing positioning calculation.
- after the correspondence is constructed, the processing device acquires the matching features in the image at the current time and the image at the previous time, and determines the position and posture of the robot according to the correspondence and the features.
- the processing device 15 can acquire two images of the previous time t1 and the current time t2 according to a preset time interval or an image number interval, and identify and match features in the two images.
- the time interval may be selected between a few milliseconds and a few hundred milliseconds
- the image number interval may be selected between 0 frames and tens of frames.
- Such features include, but are not limited to, shape features, grayscale features, and the like.
- the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
- the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of an area in the image containing a preset grayscale range, and the like.
- the number of matching features is usually plural, for example, more than ten.
- the processing device 15 finds features that can be matched from the identified features based on the locations of the identified features in the respective images. For example, referring to FIG. 2, it is shown as a schematic diagram of the positional change relationship of the matching features in the two images acquired at time t1 and time t2.
- for example, suppose the processing device 15 determines that image P1 includes features a1 and a2, that image P2 includes features b1, b2, and b3, that features a1, b1, and b2 all belong to the same class of feature, and that features a2 and b3 belong to the same class of feature. The processing device 15 may first determine that feature a1 in image P1 is located to the left of feature a2 with a pitch of d1 pixels, that feature b1 in image P2 is located to the left of feature b3 with a pitch of d1' pixels, and that feature b2 is located to the right of feature b3 with a pitch of d2' pixels.
- the processing device 15 then compares the positional relationship of features b1 and b3, and of features b2 and b3, with the positional relationship of features a1 and a2, and compares the pixel pitch of features b1 and b3, and of features b2 and b3, with the pixel pitch of features a1 and a2,
- so that feature a1 in image P1 is matched to feature b1 in image P2, and feature a2 is matched to feature b3.
- the processing device 15 then uses the matched features to locate the position and attitude of the robot in accordance with the change in position of the image pixels corresponding to each of the features.
- the position of the robot can be obtained from a displacement change in a two-dimensional plane, and the attitude of the robot can be obtained from an angular change in a two-dimensional plane.
- the processing device 15 may determine the image position offset information of multiple features in the two images, or determine the physical position offset information of the multiple features in the physical space according to the correspondence, and integrate the obtained position offset information to calculate the relative position and attitude of the robot from time t1 to time t2. For example, by coordinate conversion, the processing device 15 obtains that, from time t1 when image P1 was captured to time t2 when image P2 was captured, the robot moved a length m on the ground and rotated n degrees to the left. Taking the cleaning robot as an example, when the cleaning robot has established a map, the position and posture obtained by the processing device 15 can help the cleaning robot determine whether it is on the navigation route. When the cleaning robot has not established a map, the position and posture obtained by the processing device 15 can help the cleaning robot determine the relative displacement and the relative rotation angle, and thereby map the data.
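The "moved length m, rotated n degrees" computation above can be sketched as follows. This is a minimal two-dimensional illustration, assuming matched feature positions are given as pixel pairs and that a pixels-to-millimetres scale has already been calibrated (both assumptions, not specifics from the original):

```python
import math

def relative_pose(matches_t1, matches_t2, mm_per_pixel):
    """Estimate displacement (mm) and heading change (deg) between times t1 and t2
    from the positions of matched features in the two images."""
    n = len(matches_t1)
    # Mean pixel offset of all matched features -> translation.
    dx = sum(b[0] - a[0] for a, b in zip(matches_t1, matches_t2)) / n
    dy = sum(b[1] - a[1] for a, b in zip(matches_t1, matches_t2)) / n
    distance_mm = math.hypot(dx, dy) * mm_per_pixel
    # Heading change from the orientation of the segment joining two matched features.
    a0, a1 = matches_t1[0], matches_t1[1]
    b0, b1 = matches_t2[0], matches_t2[1]
    ang_t1 = math.atan2(a1[1] - a0[1], a1[0] - a0[0])
    ang_t2 = math.atan2(b1[1] - b0[1], b1[0] - b0[0])
    rotation_deg = math.degrees(ang_t2 - ang_t1)
    return distance_mm, rotation_deg
```

A production implementation would estimate the rigid transform robustly over many matches (e.g. least squares with outlier rejection) rather than from a single feature pair.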
- the processing device 15 can process, analyze, and understand the image captured by the imaging device 13 by using an image recognition method based on a convolutional neural network, an image recognition method based on a wavelet moment, or the like to identify targets of various modes. Object.
- the processing device can also seek similar image targets by analyzing the correspondence, similarity and consistency of image content, features, structures, relationships, textures, and gradations.
- the objects in the image taken by the camera device generally include, for example, a wall, a table, a sofa, a wardrobe, a television, a power socket, a network cable socket, and the like.
- the image pickup device 13 supplies the image to the processing device 15 after capturing an image in the navigation operation environment of the cleaning robot, and the processing device 15 recognizes the graphic of the physical object in the captured image by image recognition.
- the graphic of the physical object can be characterized by features such as grayscale of the real object, contour of the physical object, and the like.
- the graphic of the physical object is not limited to the external geometric figure of the physical object, and may include other graphics presented on the physical object, such as a two-hole socket on the power socket, a five-hole socket, a square socket on the network cable socket, and the like.
- for example, the five-hole jack of the power socket and the square jack of the network cable socket can be used to distinguish the two.
- the objects in the indoor images captured by the cleaning robot's image capturing device can include a power socket or a network cable socket; since power sockets and network cable sockets are designed according to the GB standard, they do not differ from one environment to another.
- the standard physical characteristics of the standard parts may include the length, width, and height of the power socket, and the structural relationship of the five-hole socket on the power socket.
- the graphics of the standard and the standard physical features of the standard may be preset and pre-stored using the storage device of the robot.
- the manner in which the standard physical features of the standard are obtained includes reading the preset standard physical features from the storage device of the robot.
- the standard component may include a standard component designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard.
- the standard physical features may include an outline size, a standard structural relationship, and the like; for example, the standard physical features of the standard component include the actual physical length, width, and height of the standard component, the actual physical size data of corresponding structures in the standard component, and the like.
- for example, the spacing between two jacks on the power socket, or the length and width of the power socket.
- the processing device 15 analyzes the correspondence, similarity, and consistency of image content, features, structures, relationships, textures, gray scales, and the like to determine whether the identified at least one graphic corresponds to the stored graphic of a standard piece, and acquires the standard physical features of the standard piece when the identified at least one graphic corresponds to the stored graphic of the standard piece.
- the at least one graphic corresponding to the graphic of the stored standard piece is referred to as a standard figure.
- the storage device 11 stores a standard power socket pattern
- the processing device 15 determines, by the correspondence, similarity, and consistency of image content, features, structures, relationships, textures, gray scales, and the like, whether the identified at least one graphic corresponds to the stored graphic of the power outlet, and when the identified at least one graphic corresponds to the stored graphic of the power outlet, the standard physical characteristics of the power outlet are obtained.
- the processing device 15 can then calculate the spatial position of the flexible winding present in the image, and can also obtain the size of the flexible winding (for example, the length and thickness of the flexible winding) and the area covered by the flexible winding by using the spatial positional relationship of the standard measurements of the socket (such as the length and width of the frame of the socket, or the spacing of the jacks in the socket).
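Using a recognized standard part to calibrate scale, as described above, can be sketched as follows. The socket dimensions used here are illustrative assumptions, not values from the original text:

```python
def mm_per_pixel_from_standard(real_width_mm, pixel_width):
    """Calibrate the image scale from a standard part whose true width is known,
    e.g. the frame width of a recognized power socket."""
    return real_width_mm / pixel_width

def winding_length_mm(pixel_length, scale):
    """Size the flexible winding using the calibrated scale."""
    return pixel_length * scale

# Assumed example: an 86 mm socket frame spans 172 pixels in the image.
scale = mm_per_pixel_from_standard(real_width_mm=86.0, pixel_width=172)
```

This only holds where the winding lies roughly in the same plane (the ground) and at a similar distance as the reference part; a full solution would correct for perspective.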
- the processing device 15 invokes a simultaneous location and map construction application and a behavior control application to control the behavior of the mobile robot.
- the behavior control application refers to controlling mobile robot navigation, posture adjustment, and the like according to the set information or instructions.
- the processing device 15 can be enabled to control the mobile system 17 of the mobile robot.
- the mobile system 17 is coupled to the processing device 15 for driving the mobile robot to move based on control commands issued by the processing device 15.
- the mobile system 17 can include a running mechanism and a driving mechanism, wherein the running mechanism can be disposed at a bottom of the mobile robot, and the driving mechanism is built in the housing of the mobile robot.
- the traveling mechanism may adopt a walking wheel mode.
- the traveling mechanism may include, for example, at least two universal walking wheels, and the at least two universal walking wheels realize movements such as moving forward, moving backward, steering, and rotation.
- the traveling mechanism may, for example, comprise a combination of two straight traveling wheels and at least one auxiliary steering wheel, wherein, in the case where the at least one auxiliary steering wheel does not participate, the two straight traveling wheels are mainly used for moving forward and backward, and in the case where the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels, movements such as steering and rotation can be achieved.
- the drive mechanism can be, for example, a drive motor with which the travel wheels in the travel mechanism can be driven for movement.
- the driving motor may be, for example, a reversible driving motor, and a shifting mechanism may be further disposed between the driving motor and the axle of the traveling wheel.
- when the processing device 15 recognizes a flexible winding in at least one image captured by the camera 13, the simultaneous positioning and map construction application and the behavior control application are invoked from the storage device 11 to control the behavior of the mobile robot.
- the processing device 15 may adopt different behavior control modes for the mobile robot, wherein the behavior of the mobile robot may include, but is not limited to, the movement of the mobile robot and the posture of the mobile robot.
- the manner in which the processing device 15 invokes the simultaneous positioning and map construction application and the behavior control application to control the behavior of the mobile robot can include issuing control commands to the mobile system 17, based on the information of the flexible winding, to control the mobile robot to move along the original navigation route and pass over the flexible winding.
- specifically, if the processing device 15 recognizes the flexible winding and, combining information such as its category, size, and/or position, judges that the flexible winding does not interfere with the normal operation of the mobile robot, the processing device 15 issues control commands to the mobile system 17 to control the mobile robot to move along the original navigation route and pass over the flexible winding.
- the processing device 15 controls the cleaning robot to move along the original navigation route and over the flexible winding.
- controlling the cleaning robot to move according to the original navigation route may be controlling the cleaning robot to move according to the original navigation route at the original moving speed and in the original posture.
- controlling the cleaning robot to move according to the original navigation route may also be controlling the cleaning robot to change the original moving speed and move in the original posture according to the original navigation route, where changing the original moving speed may include increasing the moving speed and reducing the moving speed.
- controlling the cleaning robot to move according to the original navigation route may also be controlling the cleaning robot to change the moving speed and change the posture while moving according to the original navigation route, where changing the original moving speed may include increasing the moving speed and reducing the moving speed.
- the manner in which the processing device 15 invokes the simultaneous positioning and map construction application and the behavior control application to control the behavior of the mobile robot can also include issuing control commands to the mobile system 17, based on the information of the flexible winding, to control the mobile robot to modify the original navigation route and pass over the flexible winding. Specifically, if the processing device 15 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding is likely to interfere with the normal operation of the mobile robot under the original navigation route, but that the interference can be avoided by changing the original navigation route, the processing device 15 issues control commands to the mobile system 17 to control the mobile robot to modify the original navigation route and pass over the flexible winding.
- for example, the manner in which the flexible winding is placed in the image may interfere with the normal operation of the mobile robot (for example, the length direction in which a cable, rope, or thread is placed is substantially consistent with the original navigation route, or a cable, rope, thread, ribbon, or the like is otherwise unfavorably placed).
- in this case, the processing device 15 can control the cleaning robot to modify the original navigation route and move over the flexible winding, for example, by modifying the original navigation route so that the modified navigation route is perpendicular to the direction in which the flexible winding is placed, so that the cleaning robot can pass over the flexible winding; or by modifying the original navigation route so that, when the cleaning robot passes the flexible winding, the flexible winding is not at the position of the roller or the suction inlet of the cleaning robot on the new navigation route.
- during this process, the moving speed of the cleaning robot may be implemented in different manners: the moving speed may be kept unchanged, increased, or reduced.
- the manner in which the processing device 15 invokes the simultaneous positioning and map construction application and the behavior control application to control the behavior of the mobile robot can also include issuing control commands to the mobile system 17, based on the information of the flexible winding, to control the mobile robot to modify the original navigation route to avoid the flexible winding. Specifically, if the processing device 15 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding is likely to interfere with the normal operation of the mobile robot, the processing device 15 issues control commands to the mobile system 17 to control the mobile robot to modify the original navigation route to avoid the flexible winding.
- for example, the processing device 15 can control the cleaning robot to modify the original navigation route and move so as to avoid the flexible winding.
- the manner in which the processing device 15 invokes the simultaneous positioning and map construction application and the behavior control application to control the behavior of the mobile robot may also include, based on the information of the flexible winding, ignoring the flexible winding and issuing control commands to the mobile system 17 to control the mobile robot to move according to the original navigation route. Specifically, if the processing device 15 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding is not on the original navigation route, the processing device 15 issues control commands to the mobile system 17 to control the mobile robot to move according to the original navigation route.
- for example, the processing device 15 can control the cleaning robot to move according to the original navigation route, ignoring the flexible winding.
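The control outcomes described above (pass over on the original route, modify the route and pass over, modify the route to avoid, or ignore the winding and keep the route) can be summarized in a small decision sketch; the predicate names are assumptions for illustration, not part of the original text:

```python
def decide_behavior(on_route, interferes, avoidable_by_reroute):
    """Map the recognition result of a flexible winding to a control command."""
    if not on_route:
        return "keep_route"       # winding ignored; move along the original route
    if not interferes:
        return "pass_over"        # move along the original route and over the winding
    if avoidable_by_reroute:
        return "modify_and_pass"  # change the route, then pass over the winding
    return "modify_and_avoid"     # change the route to avoid the winding entirely
```

In the text, `on_route`, `interferes`, and `avoidable_by_reroute` would be derived from the winding's category, size, and/or position combined with the current navigation route.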
- the mobile robot of the present application may further include an alarm device (not shown) coupled to the processing device 15 for issuing alarm information when the processing device 15 recognizes that a flexible winding is present in the image. Specifically, if the processing device 15 recognizes that there is a flexible winding in the image, the processing device 15 issues a control command to the alarm device to control the alarm device to issue the alarm information.
- the alarm device and the alarm information it issues may be in various embodiments or a combination thereof.
- the alarm device may be, for example, a buzzer that sounds an alarm when the processing device 15 recognizes that there is a flexible winding in the image.
- the alarm device may be, for example, an alarm light that emits alarm light when the processing device 15 recognizes that there is a flexible winding in the image; the alarm light may be a constant light or a blinking light.
- the alarm device may be, for example, an information transmitting device that, when the processing device 15 recognizes that there is a flexible winding in the image, sends alarm information to a network-connected user terminal (e.g., a smartphone) or to indoor intelligent terminals (such as smart speakers, smart light bulbs, smart displays, etc.). With the alarm device, the information that a flexible winding has been discovered can be issued immediately, so that an operator can subsequently remove the flexible winding to clear the obstacle.
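Fanning one alarm out to the several alarm channels named above (buzzer, alarm light, message to a user terminal) might look like the following minimal sketch; the channel names and handler shape are illustrative assumptions:

```python
def raise_alarm(channels, message="flexible winding detected"):
    """Send the alarm message to every configured alarm channel."""
    delivered = []
    for name, send in channels.items():
        send(message)
        delivered.append(name)
    return delivered

# Stand-in channel handlers that record what they would emit.
log = []
channels = {
    "buzzer": lambda m: log.append(("buzzer", m)),
    "light": lambda m: log.append(("light", m)),
    "phone": lambda m: log.append(("phone", m)),
}
```

Real handlers would drive the buzzer GPIO, the LED, and the network push respectively; the dictionary-of-callables shape keeps the dispatch independent of any one channel.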
- in the working mode of the mobile robot, the mobile robot of the present application acquires an image including the ground by photographing, recognizes the image, and invokes the simultaneous positioning and map construction application and the behavior control application to control the behavior of the mobile robot when it is recognized that a flexible winding is present in the image. With the mobile robot of the present application, flexible windings can be effectively detected, and the behavior of the mobile robot can be controlled accordingly according to the detection result.
- the cleaning robot includes an imaging device 21, a control system 23, a moving system 25, and a cleaning system 27, wherein the control system 23 further includes a storage device 230 and a processing device 232.
- the camera device 21 is used to acquire an image of the operating environment in the operating mode of the cleaning robot.
- the image pickup device 21 includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
- the power supply system of the camera device 21 can be controlled by the power supply system of the cleaning robot.
- the camera device 21 starts capturing images and supplies them to the processing device 232.
- the camera device in the cleaning robot caches the captured indoor images in the storage device 230 in a preset video format, from which they are acquired by the processing device 232.
- the image pickup device 21 is for taking an image during movement of the cleaning robot.
- the camera device 21 may be disposed on the top surface of the cleaning robot.
- the camera device in the cleaning robot is disposed on the middle, or edge, of the top surface of its housing.
- the field-of-view optical axis of the imaging device is within ±30° with respect to the vertical.
- the angle of the optical axis of the camera of the cleaning robot with respect to the perpendicular is -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°.
- the camera device 21 can be disposed at the intersection of the top surface and the side surface of the cleaning robot.
- At least one recessed structure is provided at the intersection of the top surface and the side surface of the cleaning robot housing (the recessed structure may be disposed at the front end, the rear end or the side end of the housing), and the imaging device is disposed in the recessed structure .
- the plane defined by the top surface of the housing is parallel to the horizontal plane, and the angle α between the lens optical axis of the camera and the top surface of the housing is 61° to 85°.
- for example, the angle α may be 61°, 62°, 63°, 64°, 65°, 66°, 67°, 68°, 69°, 70°, 71°, 72°, 73°, 74°, 75°, 76°, 77°, 78°, 79°, 80°, 81°, 82°, 83°, 84°, or 85°.
- the lens in the camera is designed to be tilted forward to capture more environmental information.
- a camera with a forward tilt design can capture more of the environmental image in front of the cleaning robot than a camera with the lens facing up vertically, for example, cleaning a portion of the ground area in front of the robot.
- in the above examples, the angle between the optical axis and the vertical line, or between the optical axis and the top surface of the housing, is given as an integer with an angular precision of 1°, but the precision is not limited to this range; depending on the design requirements of the actual cleaning robot, the angular precision may be higher, for example, 0.1°, 0.01°, or finer. Exhaustive examples are not given here.
- the storage device 230 stores a simultaneous positioning and map construction application and a behavior control application.
- the SLAM (Simultaneous Localization And Mapping) application is a basic application in the field of intelligent robots.
- the positioning technique of the cleaning robot includes a process that allows the cleaning robot to determine its position and orientation (or "attitude") relative to its surroundings; a cleaning robot that can build a map of its surroundings can position itself within that map and thus demonstrate autonomy.
- the problem can be described as follows: when the cleaning robot is in an unknown environment, is there a way for it to gradually draw a complete map of the environment while deciding which direction it should travel? That is, achieving intelligence requires completing three tasks: the first is Localization, the second is Mapping, and the third is the subsequent Navigation.
- the behavior control application in the present application refers to controlling cleaning robot navigation, guiding (or "attitude") adjustment, and the like according to the set information or instructions.
- “Attitude” herein includes the position of the cleaning robot within the moving space (e.g., x, y coordinate locations) and the orientation of the cleaning robot relative to, for example, a reference (e.g., a wall) or a cardinal direction within the moving space.
- a technique based on visual Simultaneous Localization and Mapping can use the image data from the image sensor to compensate for errors in the movement information provided by the motion sensors, thereby providing more accurate navigation capability for the cleaning robot.
- the behavior control application is a basic application in the field of intelligent robots that is associated with processing device 232 and mobile system 25. With the behavior control application, the processing device 232 can be enabled to control the mobile system 25. In practical applications, the behavior control application can be combined with the aforementioned SLAM application, so that the processing device 232 can issue control instructions to the mobile system 25 according to the positioning information and map information obtained by the SLAM application, causing the mobile system 25 to perform the corresponding behaviors. "Behavior" in this article includes the movement and posture of the cleaning robot.
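The interaction just described, where positioning information from the SLAM application drives control instructions to the mobile system, can be sketched as a simple waypoint controller. This is an illustrative assumption rather than the patent's actual control law; the function name and command format are invented here.

```python
import math

# Sketch: given the pose produced by the SLAM application, compute a
# movement command (turn angle, travel distance) that the processing
# device could issue to the mobile system to reach the next waypoint.

def behavior_control(pose, waypoint):
    """pose: (x, y, heading_deg); waypoint: (x, y).
    Returns (turn_deg, distance) as a simple control command."""
    x, y, heading = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    target_heading = math.degrees(math.atan2(dy, dx))
    # Normalize to the shortest signed turn in (-180, 180]
    turn = (target_heading - heading + 180) % 360 - 180
    distance = math.hypot(dx, dy)
    return turn, distance

# Robot at the origin facing +y, next waypoint at (1, 1):
turn, dist = behavior_control((0.0, 0.0, 90.0), (1.0, 1.0))
```

A real controller would also consult the map for obstacles before issuing the command; this sketch only shows the pose-to-command step.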
- the standard component may include a standard component designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard.
- industry standards such as mechanical industry standard JB, building materials industry standard JC, etc.; national standards such as China GB standard, German DIN standard, British BS standard, etc.; international standards such as international ISO standards; custom standards will be detailed later.
- the standard physical features may include a profile size, a standard structural relationship, etc.; for example, the standard physical features of a standard component include the actual physical length, width, and height of the standard component, and other actual physical size data of the corresponding standard embodied in the standard component.
- for example, the spacing between the two holes of a power outlet, or the length and width of the power outlet; another example is the length and width of a floorboard or of a floor tile; still another is the length, width, and thickness of a carpet.
- the storage device 230 includes, but is not limited to, a Read-Only Memory (ROM), a Random Access Memory (RAM), and a Nonvolatile RAM (NVRAM), for example, one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- storage device 230 can also include memory remote from the one or more processors, such as network-attached storage accessed via RF circuitry or external ports and a communication network (not shown), where the communication network can be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or a suitable combination thereof.
- the memory controller can control access to the storage device by other components of the cleaning robot, such as a central processing unit (CPU) and a peripheral interface.
- Processing device 232 includes one or more processors. Processing device 232 is operatively coupled to read-only memory, random access memory, and/or nonvolatile memory in storage device 230. Processing device 232 can execute instructions stored in read-only memory, random access memory, and/or nonvolatile memory to perform operations in the robot, such as extracting features from an image and positioning within a map based on the features, or acquiring an image and recognizing it.
- the processor may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof.
- Processing device 232 is also operatively coupled to an I/O port and an input structure that enables the cleaning robot to interact with various other electronic devices that enable the user to interact with the computing device.
- the input structure can include buttons, keyboards, mice, trackpads, and the like.
- the other electronic device may be a mobile motor in the mobile device in the cleaning robot, or a slave processor in the cleaning robot dedicated to controlling the mobile device and the cleaning device, such as a Microcontroller Unit (MCU).
- processing device 232 connects storage device 230 and camera device 21, respectively, via data lines.
- Processing device 232 interacts with storage device 230 via data read and write techniques, and processing device 232 interacts with camera device 21 via an interface protocol.
- the data reading and writing technology includes but is not limited to: a high speed/low speed data interface protocol, a database read and write operation, and the like.
- the interface protocols include, but are not limited to, an HDMI interface protocol, a serial interface protocol, and the like.
- the processing device 232 is configured to control the camera device to capture images so as to acquire an image including the ground in the operating mode of the cleaning robot, and, when a flexible winding is recognized in the image, to invoke the simultaneous localization and map construction application and the behavior control application from the storage device to control the behavior of the cleaning robot.
- the processing device 232 is configured to acquire at least one image from the image captured by the camera device 21, and identify the at least one image to detect whether a flexible winding exists in the at least one image.
- the camera 21 can be controlled to take an image to obtain an image including the ground.
- the “ground” here refers specifically to the ground over which the cleaning robot moves according to its walking path.
- the identification of the flexible wrap in the at least one image is performed by using a flexible wrap image classifier; that is, during identification, the image to be recognized is provided as input to the flexible wrap image classifier, and the classifier outputs the recognition result.
- the flexible wrap image classifier is trained by a convolutional neural network.
- the training can include, first, creating a training sample set, and acquiring an image containing a flexible wrap that conforms to a preset rule as a training sample. Thereafter, training is performed according to the set of training samples produced to obtain a flexible wrap image classifier.
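The classifier trained by a convolutional neural network can be illustrated by the forward pass of a tiny convolutional model that maps a grayscale image to a probability that a flexible winding is present. This is a minimal sketch with random weights, not the patent's actual network; in practice the weights would be learned from the training sample set described above.

```python
import numpy as np

# Tiny convolutional classifier sketch (illustrative only):
# convolution -> ReLU -> global average pooling -> logistic output.

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify(img, kernels, w, b):
    """Return the probability (0..1) that a flexible winding is present."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))

image = rng.random((16, 16))            # stand-in for a captured ground image
kernels = rng.standard_normal((4, 3, 3))  # 4 learned 3x3 filters (here random)
w, b = rng.standard_normal(4), 0.0
p = classify(image, kernels, w, b)
```

A production classifier would have many more layers and be trained with backpropagation on the positive and negative sample sets; the sketch only shows the inference structure.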
- the images of flexible windings conforming to the preset rules may be collected by the developer, for example, by searching the network for images of the relevant flexible windings or by photographing the relevant flexible windings, and images of typical flexible windings that conform to the preset rules are selected and used as training samples.
- images of some or all flexible windings may also be selected as training samples from existing standard libraries of various types of flexible windings; for example, images of some or all flexible windings may be selected from different standard libraries of flexible windings and combined to form the training sample set, or at least one standard library may be selected from the different standard libraries of flexible windings, with some or all of the images in the selected at least one standard library determined as the training sample set.
- the image containing the flexible wrap used as a training sample may be a simple image with a single background (for example, a single solid color) or an image with a real-world background. Since the cleaning robot controls the imaging device 21 to capture images that include the ground, the image used as a training sample may be a ground image that includes the flexible winding.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- the ground includes, but is not limited to, the following categories: cement floor, painted floor, floor laid with composite flooring, floor laid with solid wood flooring, floor laid with carpet, and the like.
- a corresponding set of training samples can be made for each specific flexible winding; that is, a set of cable training samples corresponding to cables (e.g., images of various types of cables presented in different patterns on different grounds), a set of rope training samples corresponding to ropes (e.g., images of various types of ropes presented in different patterns on different grounds), a set of ribbon training samples corresponding to ribbons (e.g., images of various types of ribbons presented in different patterns on different grounds), a set of cloth-head training samples corresponding to cloth heads (e.g., images of various types of cloth heads presented in different patterns on different grounds), a set of cotton-wool training samples corresponding to cotton wool (e.g., images of various types of cotton wool presented in different patterns on different grounds), a set of plant-vine training samples corresponding to plant vines (e.g., images of various types of plant vines presented in different patterns on different grounds), and the like.
- the image pre-processing of the images in the training sample set may be performed before the training set of the training samples is trained.
- the image pre-processing includes, but is not limited to, cropping, compressing, grayscale processing, image filtering, and/or noise filtering processing, etc., of the images in the training sample set.
- the training can include: first, creating a training sample set by collecting images containing flexible wraps that conform to the preset rules as positive samples, and collecting images that contain no flexible wrap, or that contain flexible wraps not conforming to the preset rules, as negative samples. Thereafter, training is performed on the training sample set so produced to obtain the flexible wrap image classifier.
- the images of flexible windings conforming to the preset rules may be collected by the developer, for example, by searching the network for images of the relevant flexible windings or by photographing them, and images of typical flexible windings that conform to the preset rules are selected and used as positive samples.
- images of some or all flexible windings may also be selected as positive samples from existing standard libraries of various types of flexible windings, for example, from different standard libraries of flexible windings, with the selected images combined to form the positive sample set.
- the image containing the flexible wrap used as a positive sample may be a simple image with a single background (for example, a single solid color) or an image with a real-world background. Since the cleaning robot controls the image pickup device 21 to capture images that include the ground, the image used as a positive sample may be a ground image that includes the flexible winding.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- the ground includes, but is not limited to, the following categories: cement floor, painted floor, floor laid with composite flooring, floor laid with solid wood flooring, floor laid with carpet, and the like. Therefore, for each specific flexible winding, a corresponding positive sample set can be made; that is, a positive sample set of cables corresponding to cables (e.g., images of various types of cables presented in different patterns on different grounds), a positive sample set of ropes corresponding to ropes (e.g., images of various types of ropes presented in different patterns on different grounds), a positive sample set of ribbons corresponding to ribbons (e.g., images of various types of ribbons presented in different patterns on different grounds), a positive sample set of cloth heads corresponding to cloth heads (e.g., images of various types of cloth heads presented in different patterns on different grounds), a positive sample set of cotton wool corresponding to cotton wool (e.g., images of various types of cotton wool presented in different patterns on different grounds), a positive sample set of plant vines corresponding to plant vines (e.g., images of various types of plant vines presented in different patterns on different grounds), and the like.
- images that contain no flexible wrap, or that contain flexible wraps not conforming to the preset rules, may be collected by the developer, for example, by searching the network for such images or by photographing them, and suitable images that contain no flexible wrap or that contain flexible wraps not conforming to the preset rules are selected and used as negative samples.
- some or all images may also be selected as negative samples from existing standard libraries that do not contain flexible windings; for example, some or all images may be selected from different standard libraries that do not contain flexible windings and combined to form the negative sample set, or at least one standard library may be selected from the different standard libraries that do not contain flexible windings, with some or all of the images in the selected at least one standard library determined as negative samples.
- the flexible windings include, but are not limited to, the following categories: cables, ropes, ribbons, laces, towels, cloth heads, cotton wool, plant vines, and the like.
- the ground includes, but is not limited to, the following categories: cement floor, painted floor, floor laid with composite flooring, floor laid with solid wood flooring, floor laid with carpet, and the like.
- a corresponding negative sample set can be made; that is, a negative sample set of cables corresponding to cables (e.g., images on different grounds that contain no cables or that contain cables not conforming to the preset rules), a negative sample set of ropes corresponding to ropes (e.g., images on different grounds that contain no ropes or that contain ropes not conforming to the preset rules), a negative sample set of ribbons corresponding to ribbons (e.g., images on different grounds that contain no ribbons or that contain ribbons not conforming to the preset rules), a negative sample set of cloth heads corresponding to cloth heads (e.g., images on different grounds that contain no cloth heads or that contain cloth heads not conforming to the preset rules), and the like.
- the image pre-processing of the images in the training sample set may be performed before the training set of the training samples is trained.
- the image pre-processing includes, but is not limited to, cropping, compressing, grayscale processing, image filtering, and/or noise filtering processing of the images in the training sample set.
- the image can be identified using the trained flexible wrap image classifier.
- the image to be recognized is input as an input to the flexible wrap image classifier, and the corresponding recognition result is output by the flexible wrap image classifier.
- identifying the image by using the flexible wrap image classifier may include at least the following steps: performing image preprocessing on the image to be recognized; performing feature extraction on the preprocessed image; and inputting the extracted features of the image into the flexible wrap image classifier to obtain a recognition result.
- the image pre-processing of the image to be recognized includes, but is not limited to, cropping, compression, gradation processing, thresholding processing, and the like of the image to be recognized.
- the pre-processing may further include image filtering, noise filtering processing, and the like.
- for example, grayscale processing is performed on the image to be recognized to obtain a grayscale image, and the grayscale image is then thresholded (for example, binarized) so that it becomes a binary image reflecting the overall and local features of the image, that is, a black-and-white image.
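The grayscale-then-threshold step just described can be sketched as follows. The luminance weights are the common ITU-R BT.601 coefficients, and the fixed threshold of 128 is an illustrative assumption; the patent does not specify a particular threshold.

```python
import numpy as np

# Sketch of the preprocessing: convert an RGB image to grayscale, then
# threshold it into a black-and-white (binary) image.

def to_grayscale(rgb):
    """rgb: H x W x 3 array with values in 0..255."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def binarize(gray, threshold=128):
    """Pixels at or above the threshold become white (255), others black (0)."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [255, 255, 255]   # bright pixel -> white in the binary image
rgb[1, 1] = [10, 10, 10]      # dark pixel   -> black in the binary image
binary = binarize(to_grayscale(rgb))
```

In practice an adaptive method (e.g., Otsu's threshold) would be preferable to a fixed value, since ground illumination varies between scenes.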
- Feature extraction of the image after image preprocessing includes, but is not limited to, extracting contour features, texture features, and the like of the image to be recognized.
- the aforementioned flexible wrap image classifier for performing flexible wrap identification may be pre-stored in storage device 230.
- before the cleaning robot reaches the end user (e.g., before the cleaning robot is manufactured, before the cleaning robot is delivered to each point of sale, or before the cleaning robot is sold to the end user at the point of sale), the flexible wrap image classifier is written into the storage device 230.
- the flexible wrap image classifier can be set with permissions to prohibit the end user from modifying it. Of course, it is not limited thereto.
- the flexible wrap image classifier may also have some or all of its permissions opened, allowing the end user to modify it (for example, through modification, addition, or deletion operations).
- the flexible wrap image classifier may also perform an update operation after the cleaning robot is networked and establishes a communication connection with a corresponding vendor server or application server.
- the flexible wrap image classifier may be stored in a cloud system in remote communication with the cleaning robot, such that when image recognition is performed, the processing device 232 may obtain at least one image from the images captured by the camera 21 and transmit the at least one image to the cloud system in remote communication with the cleaning robot; the at least one image is identified by the flexible wrap image classifier in the cloud system, and the recognition result is remotely transmitted back to the cleaning robot.
- by using the processing device 232, at least one image can be acquired from the images captured by the imaging device 21 and identified by the flexible wrap image classifier, whereby it can be detected whether a flexible wrap is present in the at least one image, and the specific category of any flexible wrap present can be obtained.
- the processing device 232 can also be used to invoke the simultaneous positioning and map construction application and the behavior control application from the storage device 230 to control the behavior of the cleaning robot when a flexible wrap is detected in the image.
- the processing device 232 is configured to invoke the positioning and map construction application to perform: acquiring the positions of matching features in at least two images captured at preceding and succeeding moments, and determining the position and attitude of the cleaning robot according to the correspondence between the image coordinate system and the physical space coordinate system and the positions of the matching features.
- the storage device 230 also stores a correspondence relationship between the image coordinate system and the physical space coordinate system.
- the image coordinate system is an image coordinate system constructed based on image pixel points, and the two-dimensional coordinate parameter of each image pixel point in the image captured by the imaging device 21 can be described by the image coordinate system.
- the image coordinate system may be a Cartesian coordinate system or a polar coordinate system or the like.
- the physical space coordinate system is a coordinate system constructed based on positions in the actual two-dimensional or three-dimensional physical space; a physical space position can be described in the physical space coordinate system according to a preset correspondence between an image pixel unit and a unit length (or unit angle).
- the physical space coordinate system may be a two-dimensional Cartesian coordinate system, a polar coordinate system, a spherical coordinate system, a three-dimensional rectangular coordinate system, or the like.
- the correspondence may be pre-stored in the storage device before leaving the factory. Alternatively, for a robot used in scenes of high ground complexity, such as a cleaning robot, the correspondence can be obtained by performing field testing at the site of use and storing the result in the storage device 230.
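One simple way such a field-tested correspondence could be established is sketched below: the robot moves a known physical distance, and the pixel offset of a matched feature between the two images yields a pixels-per-unit-length scale. The function names and numbers are illustrative assumptions, not the patent's procedure.

```python
# Sketch of calibrating the correspondence between the image coordinate
# system and the physical space coordinate system by field testing.

def calibrate_scale(pixel_offset, physical_distance_m):
    """Pixels moved in the image per meter of robot displacement."""
    return pixel_offset / physical_distance_m

def pixels_to_meters(pixel_delta, scale):
    """Convert a pixel offset back to a physical distance using the scale."""
    return pixel_delta / scale

# Example: a matched feature shifted 120 pixels while the robot moved 0.5 m
scale = calibrate_scale(120, 0.5)        # pixels per meter
distance = pixels_to_meters(60, scale)   # later: 60 px maps back to meters
```

A full calibration would account for lens distortion and the camera's tilt angle; this sketch captures only the basic pixel-to-length correspondence stored in the storage device.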
- the cleaning robot further includes a motion sensing device (not shown) for acquiring movement information of the robot.
- the motion sensing device includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, and the like. During the movement of the robot, the mobile sensing device continuously detects the mobile information and provides it to the processing device.
- the displacement sensor, gyroscope, speed sensor, etc. can be integrated in one or more chips.
- the ranging sensor and the cliff sensor may be disposed on a body side of the robot.
- a ranging sensor in the cleaning robot is disposed at the edge of the housing;
- a cliff sensor in the cleaning robot is disposed at the bottom of the robot.
- the movement information that the processing device can acquire includes, but is not limited to, displacement information, angle information, distance information with obstacles, speed information, direction of travel information, and the like.
- the cleaning robot further includes an initialization device (not shown), which may construct the correspondence based on the positions of matching features in at least two images captured at preceding and succeeding moments and on the movement information acquired from the preceding moment to the current moment.
- the initialization device may be a program module whose program portion is stored in the storage device and executed via a call of the processing device.
- the processing device invokes an initialization device to construct the correspondence.
- the initialization device acquires the movement information provided by the movement sensing device during the movement of the robot and acquires the respective images captured by the imaging device 21.
- the initialization device may acquire the movement information and at least two images for a short period of time during which the robot moves. For example, the initialization device acquires the movement information and at least two images when it is detected that the robot is moving in a straight line. For another example, the initialization device acquires the movement information and at least two images when it is detected that the robot is in a turning movement. Wherein, the interval time for acquiring at least two images during the turning movement may be shorter than the interval for acquiring at least two images when moving in a straight line.
- the initialization device identifies and matches features in the respective images and obtains image locations of the matching features in the respective images.
- Features include, but are not limited to, corner features, edge features, line features, curve features, and the like.
- the initialization device can acquire image locations of matching features in accordance with a tracking device (not shown).
- the tracking device is configured to track the positions of the same feature contained in at least two images captured at preceding and succeeding moments.
- the initialization device then constructs the correspondence according to the image location and the physical spatial location provided by the movement information.
- the initialization device may establish the correspondence by constructing a feature coordinate parameter of a physical space coordinate system and an image coordinate system.
- the initialization device may take the physical space position where the image of the previous moment was captured as the coordinate origin of the physical space coordinate system, and correlate that origin with the positions of the matching features in the image coordinate system, thereby constructing the correspondence between the two coordinate systems.
- the working process of the initialization device may be performed based on a user's instruction or transparent to the user.
- the execution of the initialization device may be initiated when the correspondence is not stored in the storage device 230 or when the correspondence needs to be updated; no restriction is imposed here.
- the correspondence may be saved in the storage device by a program, a database, or the like of the corresponding algorithm.
- software components stored in memory include an operating system, a communication module (or set of instructions), a contact/motion module (or set of instructions), a graphics module (or set of instructions), and an application (or set of instructions).
- the storage device also stores temporary data or persistent data including an image captured by the imaging device and a position and posture obtained by the processing device when performing positioning calculation.
- after the correspondence is constructed, the processing device acquires matching features in the image of the current moment and the image of the previous moment, and determines the position and posture of the robot according to the correspondence and the features.
- the processing device 232 can acquire two images of the previous time t1 and the current time t2 according to a preset time interval or an image number interval, and identify and match features in the two images.
- the time interval may be selected between several milliseconds and several hundred milliseconds, and the image number interval may be selected between 0 frames and tens of frames.
- Such features include, but are not limited to, shape features, grayscale features, and the like.
- the shape features include, but are not limited to, corner features, line features, edge features, curve features, and the like.
- the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of a region in the image covering a preset grayscale range, and the like.
- processing device 232 finds features that can be matched from the identified features based on the locations of the identified features in the respective images. For example, referring to FIG. 2, it is shown as a schematic diagram of the positional change relationship of the matching features in the two images acquired at time t1 and time t2.
- the processing device 232 determines that image P1 includes features a1 and a2, that image P2 includes features b1, b2, and b3, that features a1, b1, and b2 belong to the same class of feature, and that features a2 and b3 belong to the same class of feature. The processing device 232 may first determine that feature a1 in image P1 is located to the left of feature a2 with a spacing of d1 pixels, and also determine that feature b1 in image P2 is located to the left of feature b3 with a spacing of d1' pixels, while feature b2 is located to the right of feature b3 with a spacing of d2' pixels.
- the processing device 232 then compares the positional relationship and pixel spacing of the features b1 and b3, and of the features b2 and b3, against the positional relationship and pixel spacing of the features a1 and a2, and thereby determines that the feature a1 in the image P1 matches the feature b1 in the image P2 and that the feature a2 matches the feature b3.
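The disambiguation described above, where the candidate match b2 is rejected because only the pair (b1, b3) reproduces the relative offset of (a1, a2), can be sketched as follows. This is a simplified illustration with hypothetical pixel coordinates, not the patent's actual implementation:

```python
def match_by_relative_position(feats1, feats2, tol=3.0):
    """Match features between two images by comparing the pixel offset
    of each feature pair: pairs whose in-image offsets agree (within
    tol pixels) are taken to be the same physical features."""
    names1, names2 = list(feats1), list(feats2)
    for i in range(len(names1)):
        for j in range(i + 1, len(names1)):
            a, b = names1[i], names1[j]
            dax = feats1[b][0] - feats1[a][0]
            day = feats1[b][1] - feats1[a][1]
            for c in names2:
                for d in names2:
                    if c == d:
                        continue
                    dcx = feats2[d][0] - feats2[c][0]
                    dcy = feats2[d][1] - feats2[c][1]
                    if abs(dax - dcx) <= tol and abs(day - dcy) <= tol:
                        return {a: c, b: d}
    return {}

# Image P1: a1 lies d1 = 40 px left of a2. Image P2: b1 lies 40 px left
# of b3, while b2 lies 25 px right of b3 -- only (b1, b3) matches.
p1 = {"a1": (100.0, 50.0), "a2": (140.0, 50.0)}
p2 = {"b1": (110.0, 50.0), "b2": (175.0, 50.0), "b3": (150.0, 50.0)}
print(match_by_relative_position(p1, p2))  # -> {'a1': 'b1', 'a2': 'b3'}
```

A production system would use descriptor-based matching (e.g., corner or edge descriptors) rather than exhaustive pairwise offsets; the sketch only shows the positional-consistency check the passage describes.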
- the processing device 232 uses the matched features to determine the position and orientation of the robot according to the change in position of the image pixels corresponding to each matched feature.
- the position of the robot can be obtained from the displacement change in a two-dimensional plane, and its orientation can be obtained from the angular change in that plane.
- the processing device 232 may determine the image position offset information of the plurality of features in the two images, or determine the physical position offset information of the plurality of features in physical space according to the correspondence relationship, and integrate the obtained position offset information to calculate the relative position and posture of the robot from time t1 to time t2. For example, by coordinate transformation, the processing device 232 obtains the change in position and posture of the robot from time t1 (when image P1 was captured) to time t2 (when image P2 was captured) as: moved a length of m over the ground and rotated n degrees to the left.
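The coordinate transformation from matched feature positions to a relative pose (a translation m and a rotation of n degrees) can be sketched for the planar case. This is a minimal two-point closed form under the assumption of a rigid 2D motion of the feature set; it recovers how the features moved between the two frames (the robot's own motion is the corresponding inverse transform), and the patent's actual transformation may differ:

```python
import math

def relative_pose_2d(pts_t1, pts_t2):
    """Estimate the 2D rigid motion (rotation theta, translation t)
    mapping two matched feature positions at time t1 onto their
    positions at time t2."""
    (x1, y1), (x2, y2) = pts_t1
    (u1, v1), (u2, v2) = pts_t2
    # The angle between the two feature-pair vectors gives the rotation.
    ang1 = math.atan2(y2 - y1, x2 - x1)
    ang2 = math.atan2(v2 - v1, u2 - u1)
    theta = ang2 - ang1
    # Translation: where the first point landed after the rotation.
    c, s = math.cos(theta), math.sin(theta)
    tx = u1 - (c * x1 - s * y1)
    ty = v1 - (s * x1 + c * y1)
    return theta, (tx, ty)

# Hypothetical matched features: the pair shifted by (2, 3) and
# rotated 90 degrees between the two frames.
theta, (tx, ty) = relative_pose_2d([(0.0, 0.0), (1.0, 0.0)],
                                   [(2.0, 3.0), (2.0, 4.0)])
print(math.degrees(theta), (round(tx, 6), round(ty, 6)))
# -> 90.0 (2.0, 3.0)
```

With more than two matched features, a least-squares fit over all correspondences would be used instead of a single pair.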
- the position and posture obtained by the processing device 232 can help the cleaning robot determine whether it is on the route of navigation.
- the position and posture obtained by the processing device 232 can help the cleaning robot determine the relative displacement and relative rotation angle, and thereby update the map data.
- the processing device 232 is further configured to invoke the localization and map construction application to: acquire at least one image, determine the position of the flexible winding in the at least one image according to the position of features in the at least one image, and determine the size information of the flexible winding according to a standard metric in the at least one image.
- the processing device 232 can process, analyze, and understand the images captured by the imaging device 21 using, for example, an image recognition method based on a convolutional neural network or an image recognition method based on wavelet moments, so as to identify objects of various patterns.
- the processing device can also seek similar image targets by analyzing the correspondence, similarity and consistency of image content, features, structures, relationships, textures, and gradations.
- the objects in the image taken by the camera device generally include, for example, walls, tables, sofas, wardrobes, televisions, power outlets, network cable outlets, and the like.
- the image pickup device 21 supplies the image to the processing device 232 after capturing an image in the navigation operation environment of the cleaning robot, and the processing device 232 recognizes the graphic of the physical object in the captured image by image recognition.
- the graphic of the physical object can be characterized by features such as grayscale of the real object, contour of the physical object, and the like.
- the graphic of the physical object is not limited to the external geometric figure of the physical object, and may include other graphics presented on the physical object, such as a two-hole socket on the power socket, a five-hole socket, a square socket on the network cable socket, and the like.
- the five-hole socket of the power socket and the square socket of the network cable socket can be used to distinguish one from the other.
- the objects captured by the cleaning robot's imaging device in an indoor image can include a power socket or a network cable socket; since power sockets and network cable sockets are designed according to the GB standard, they do not vary with the environment.
- the standard physical characteristics of the standard parts may include the length, width, and height of the power socket, and the structural relationship of the five-hole socket on the power socket.
- the graphics of the standard and the standard physical features of the standard may be preset and pre-stored using the storage device of the robot.
- the manner in which the standard physical features of the standard are obtained includes reading the preset standard physical features from the storage device of the robot.
- the standard component may include a standard component designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard.
- the standard physical features may include outline dimensions, standard structural relationships, etc.; for example, the standard physical features of a standard component include its actual physical length, width, and height, and other actual physical dimensions specified by the corresponding standard for the standard component.
- other examples include the spacing between the two holes on a power outlet, and the length and width of the power outlet.
- the processing device 232 analyzes the correspondence, similarity, and consistency of image content, features, structures, relationships, textures, and gray levels to determine whether the identified at least one graphic corresponds to a stored graphic of a standard piece, and acquires the standard physical features of the standard piece when the identified at least one graphic corresponds to the stored graphic of the standard piece.
- the at least one graphic corresponding to the graphic of the stored standard piece is referred to as a standard figure.
- the storage device 230 stores a standard power socket pattern
- the processing device 232 determines, through the correspondence, similarity, and consistency of image content, features, structure, relationship, texture, and gray scale, whether at least one identified graphic corresponds to the stored graphic of the power outlet, and when the identified at least one graphic corresponds to the stored graphic of the power outlet, the standard physical characteristics of the power outlet are obtained.
- the processing device 232 can then calculate the position of the flexible winding present in the image.
- the processing device 232 can also obtain the size of the flexible winding (for example, the length and thickness of the flexible winding) and the area it covers by using the spatial positional relationship given by the standard measurements of the socket (such as the length and width of the socket frame or the spacing of the jacks in the socket).
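The scale recovery described here, using the known physical dimensions of a standard piece (such as a GB power socket) to convert pixel measurements of the flexible winding into physical size, can be sketched as follows. The socket and winding dimensions below are illustrative assumptions, and the sketch further assumes the winding lies at roughly the same distance from the camera as the socket:

```python
def pixels_to_mm(known_mm, known_px):
    """Millimetres per pixel, from a standard piece of known physical
    size that spans known_px pixels in the image."""
    return known_mm / known_px

# Assumed example: an 86 mm wide socket frame spans 120 px in the image,
# and the flexible winding spans 500 px along its length and 6 px across.
scale = pixels_to_mm(86.0, 120.0)   # mm per pixel at the socket's depth
length_mm = 500 * scale
thickness_mm = 6 * scale
print(round(length_mm, 1), round(thickness_mm, 2))  # -> 358.3 4.3
```

A full implementation would correct for perspective (the winding on the floor and the socket on the wall are at different depths), for instance via the camera's extrinsic calibration; the sketch only shows the unit conversion the passage relies on.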
- Processing device 232 invokes a simultaneous location and map construction application and a behavior control application to control the behavior of the cleaning robot.
- the behavior control application refers to controlling cleaning robot navigation, posture adjustment, and the like according to the set information or instructions.
- the processing device 232 can be enabled to control the mobile system 25 of the cleaning robot.
- the mobile system 25 is coupled to the control system 23 for driving the cleaning robot to move based on control commands issued by the control system 23.
- the mobile system 25 is coupled to the processing device 232 in the control system 23 for driving the cleaning robot to move based on control commands issued by the processing device 232.
- the mobile system 25 can include a running mechanism and a driving mechanism, wherein the running mechanism can be disposed at a bottom of the cleaning robot, and the driving mechanism is built in the housing of the cleaning robot.
- the traveling mechanism may adopt a walking wheel mode.
- the traveling mechanism may include, for example, at least two universal walking wheels, by which forward and backward movement, steering, rotation, and the like are realized.
- the running mechanism may, for example, comprise a combination of two straight traveling wheels and at least one auxiliary steering wheel, wherein, when the at least one auxiliary steering wheel does not participate, the two straight traveling wheels are mainly used for forward and backward movement, and when the at least one auxiliary steering wheel participates and cooperates with the two straight traveling wheels, movements such as steering and rotation can be achieved.
- the drive mechanism can be, for example, a drive motor with which the travel wheels in the travel mechanism can be driven for movement.
- the driving motor may be, for example, a reversible driving motor, and a shifting mechanism may be further disposed between the driving motor and the axle of the traveling wheel.
- a cleaning system 27 is coupled to the control system 23 for performing a cleaning operation on the ground, based on control commands issued by the control system 23, as the mobile robot moves.
- the cleaning system 27 is coupled to the processing device 232 in the control system 23 for performing a cleaning operation on the ground based on control commands issued by the processing device 232 as the mobile robot moves.
- the cleaning system can include at least a cleaning assembly and a vacuum assembly.
- the cleaning assembly may include cleaning side brushes at the bottom of the housing and a side brush motor for controlling them; there may be at least two cleaning side brushes, symmetrically disposed on opposite sides of the front end of the housing, and the cleaning side brushes can rotate under the control of the side brush motor.
- the dust collecting assembly may include a dust collecting chamber and a vacuum cleaner, wherein the dust collecting chamber is disposed in the casing, the air outlet of the vacuum cleaner is in communication with the dust collecting chamber, and the air inlet of the vacuum cleaner is disposed at the bottom of the casing.
- the cleaning system 27 is not limited thereto. In other embodiments, the cleaning system 27 may further include, for example, a mopping device, a spray device, and the like.
- when the processing device 232 recognizes a flexible winding in at least one image captured by the camera device 21, it invokes the simultaneous localization and map construction application and the behavior control application from the storage device 230 to control the behavior of the cleaning robot.
- the processing device 232 may adopt different behavior control modes for the cleaning robot, wherein the behavior of the cleaning robot may include, but is not limited to, the cleaning robot's movement and the cleaning robot's posture.
- one manner in which the processing device 232 invokes the simultaneous localization and map construction application and the behavior control application to control the behavior of the cleaning robot includes issuing control commands to the mobile system 25, based on the information of the flexible winding, to control the cleaning robot to move along the original navigation route and pass over the flexible winding. Specifically, if the processing device 232 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding does not interfere with the normal operation of the cleaning robot, the processing device 232 issues control commands to the mobile system 25 to control the cleaning robot to move along the original navigation route and pass over the flexible winding.
- the processing device 232 can control the cleaning robot to move according to the original navigation route and pass the flexible winding.
- controlling the cleaning robot to move according to the original navigation route may mean controlling it to move along the original navigation route at the original moving speed and in the original posture.
- it may also mean controlling the cleaning robot to change the original moving speed while moving along the original navigation route in the original posture, where changing the original moving speed may include increasing or decreasing it.
- it may further mean controlling the cleaning robot to change both the moving speed and the posture while moving along the original navigation route; here too, changing the original moving speed may include increasing or decreasing it.
- another manner in which the processing device 232 invokes the simultaneous localization and map construction application and the behavior control application to control the behavior of the cleaning robot includes issuing control commands to the mobile system 25, based on the information of the flexible winding, to control the cleaning robot to modify the original navigation route and pass over the flexible winding. Specifically, if the processing device 232 identifies the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding may interfere with the normal operation of the cleaning robot under the original navigation route but can be passed by changing the original navigation route, the processing device 232 issues control commands to the mobile system 25 to control the cleaning robot to modify the original navigation route and pass over the flexible winding.
- the placement of the flexible winding in the image may interfere with the normal operation of the cleaning robot (e.g., the cable, rope, or wire lies with its length direction substantially the same as the original navigation route, or the cable, rope, thread, ribbon, etc.).
- the processing device 232 can control the cleaning robot to modify the original navigation route so as to move over the flexible winding, for example by modifying the original navigation route so that the modified route is perpendicular to the direction in which the flexible winding lies, allowing the cleaning robot to pass over it, or by modifying the original navigation route so that, as the cleaning robot passes the flexible winding, the winding does not end up under the robot's wheels or below the suction air inlet on the new route.
- the moving speed of the cleaning robot may be handled in different ways: it may be kept unchanged, increased, or decreased.
- a further manner in which the processing device 232 invokes the simultaneous localization and map construction application and the behavior control application to control the behavior of the cleaning robot includes issuing control commands to the mobile system 25, based on the information of the flexible winding, to control the cleaning robot to modify the original navigation route so as to avoid the flexible winding. Specifically, if the processing device 232 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding is likely to interfere with the normal operation of the cleaning robot, the processing device 232 issues control commands to the mobile system 25 to control the cleaning robot to modify the original navigation route to avoid the flexible winding.
- the processing device 232 can control the cleaning robot to modify the original navigation path movement to avoid the flexible winding.
- yet another manner in which the processing device 232 invokes the simultaneous localization and map construction application and the behavior control application to control the behavior of the cleaning robot includes ignoring the flexible winding, based on its information, and issuing control commands to the mobile system 25 to control the cleaning robot to move according to the original navigation route. Specifically, if the processing device 232 recognizes the flexible winding and, combining information such as its category, size, and/or position, determines that the flexible winding is not on the original navigation route, the processing device 232 issues control commands to the mobile system 25 to control the cleaning robot to move according to the original navigation route.
- the processing device 232 can control the cleaning robot to move according to the original navigation route, ignoring the flexible winding.
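The control modes above (pass over along the original route, modify the route and pass over, modify the route to avoid, or ignore) amount to a decision over the winding's information. A hedged sketch of such a dispatcher follows; the parameter names and string labels are assumptions for illustration, not the patent's interface:

```python
def choose_behavior(winding, on_route, blocks_normal_operation,
                    passable_with_detour):
    """Map flexible-winding information to one of the four control
    behaviors: ignore, pass over, modify-and-pass, or avoid."""
    if winding is None or not on_route:
        return "ignore: keep original navigation route"
    if not blocks_normal_operation:
        return "pass over along original route"
    if passable_with_detour:
        return "modify route and pass over"
    return "modify route to avoid"

# Hypothetical example: a cable on the route that cannot be passed
# even with a detour, so the robot must route around it.
print(choose_behavior({"category": "cable"}, on_route=True,
                      blocks_normal_operation=True,
                      passable_with_detour=False))
# -> modify route to avoid
```

In the patent's terms, the boolean inputs would be derived from the winding's category, size, and position combined with the robot's pose and original navigation route.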
- the cleaning robot of the present application may further include an alarm device (not shown) coupled to the processing device 232 for issuing an alarm message when the processing device 232 recognizes that a flexible winding is present in the image. Specifically, if the processing device 232 recognizes that there is a flexible winding in the image, the processing device 232 issues a control command to the alarm device to control the alarm device to issue an alarm message.
- the alarm device and the alarm information it issues may be in various embodiments or a combination thereof.
- the alarm device may be, for example, a buzzer that sounds an alarm when the processing device 232 recognizes that there is a flexible winding in the image.
- the alarm device can be, for example, an alarm light that emits an alarm light when the processing device 232 recognizes that there is a flexible winding in the image; the alarm light can be constant or blinking.
- the alarm device may be, for example, an information transmitting device that, when the processing device 232 identifies that there is a flexible winding in the image, sends an alarm message to a network-connected user terminal (e.g., a smartphone) or to indoor intelligent terminals (such as smart speakers, smart light bulbs, smart displays, etc.). With the alarm device, notice that a flexible winding has been discovered can be issued immediately, so that an operator can subsequently remove the flexible winding and clear the obstacle.
- in the working mode of the cleaning robot, the cleaning robot of the present application acquires an image including the ground by photographing, recognizes the image, and, when a flexible winding is recognized in the image, invokes the simultaneous localization and map construction application and the behavior control application to control the behavior of the cleaning robot.
- the flexible winding can be effectively detected, and the behavior of the cleaning robot can be controlled according to the detection result.
- FIG. 4 is a flow chart of an embodiment of a mobile robot control method according to an embodiment of the present application.
- the control method of the mobile robot of the present application is applied to a mobile robot having an imaging device and a mobile system.
- the control method of the mobile robot of the present application is as follows:
- Step S41: in the working mode of the mobile robot, the camera device is controlled to capture an image including the ground.
- the image pickup device can be used to take an image in the navigation operation environment of the mobile robot.
- the camera device includes, but is not limited to, a camera, a video camera, a camera module integrated with an optical system or a CCD chip, a camera module integrated with an optical system and a CMOS chip, and the like.
- the power supply system of the camera device can be controlled by the power supply system of the mobile robot, and the camera device starts capturing images during the power-on movement of the mobile robot.
- in the working mode of the mobile robot, the camera device can be controlled to capture images so as to obtain an image including the ground; the "ground" here can specifically be the ground along the walking path to which the mobile robot moves.
- an image pickup device may be used to take an image to acquire an image of the ground located in front of the moving direction of the cleaning robot.
- the image pickup device may be provided on a main body of the robot.
- the camera device may be disposed on a top surface of the mobile robot.
- the camera device in the cleaning robot is disposed on the middle, or edge, of the top surface of its housing.
- the optical axis of the field of view of the imaging device is within ±30° of the vertical line.
- the camera device can be disposed at a junction of a top surface and a side surface of the mobile robot.
- at least one recessed structure is provided at the intersection of the top surface and the side surface of the cleaning robot housing (the recessed structure may be disposed at the front end, the rear end, or a side end of the housing), and the imaging device is disposed in the recessed structure.
- the angle α between the lens optical axis of the camera device and the plane defined by the top surface of the housing (this plane may coincide with a horizontal plane; that is, when the mobile robot is placed flat on a horizontal surface, the plane defined by the top surface of the housing is parallel to the horizontal plane) is 61° to 85°.
- the lens in the camera is designed to be tilted forward to capture more environmental information.
- a camera with a forward-tilted lens can capture more of the environment in front of the cleaning robot than a camera with the lens facing vertically upward, for example a portion of the ground area in front of the cleaning robot.
- the navigation operation environment refers to an environment in which the mobile robot moves according to a navigation route constructed using the built map data, or moves along a randomly designed navigation route, and performs corresponding operations.
- the navigation operating environment refers to an environment in which the sweeping robot moves according to the navigation route and performs a cleaning operation.
- Step S43: at least one image including the ground captured by the imaging device is recognized, and the behavior of the mobile robot is controlled when a flexible winding is recognized in the image.
- in step S43, at least one image including the ground is recognized; when the presence of the flexible winding is identified in the image, the behavior of the mobile robot is controlled accordingly, according to the information of the flexible winding and the position information of the mobile robot.
- FIG. 5 it is shown as a schematic diagram of the refinement process of FIG.
- step S43 further includes the following refinement steps:
- Step S431: at least one image including the ground is acquired from the images captured by the imaging device.
- Step S433: the at least one image containing the ground is recognized by a flexible winding image classifier to obtain a recognition result.
- the identification of the flexible winding in the at least one image is performed using a flexible winding image classifier; that is, during identification, the image to be recognized is fed as input to the flexible winding image classifier, and the recognition result is output.
- the flexible wrap image classifier is trained by a convolutional neural network.
- a corresponding training sample set can be made for each category: a cable training sample set (e.g., images of various types of cables presented in different patterns on different grounds), a rope training sample set (e.g., images of various types of ropes presented in different patterns on different grounds), a ribbon training sample set (e.g., images of various types of ribbons presented in different patterns on different grounds), a cloth training sample set (e.g., images of various types of cloth heads presented in different patterns on different grounds), a cotton wool training sample set (e.g., images of various types of cotton wool presented in different patterns on different grounds), a plant vine training sample set (e.g., images of various types of plant vines presented in different patterns on different grounds), and the like.
- identifying the image using the flexible winding image classifier may include at least the following steps: performing image preprocessing on the image to be recognized; performing feature extraction on the preprocessed image; and inputting the extracted features into the flexible winding image classifier to obtain a recognition result.
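The preprocess, extract-features, classify pipeline can be sketched as follows. This is a minimal illustration with toy stand-in components; in the patent, the feature extractor and classifier head would be a convolutional neural network trained on the per-category sample sets described above:

```python
import numpy as np

CATEGORIES = ["none", "cable", "rope", "ribbon", "cloth",
              "cotton", "plant_vine"]

def preprocess(image):
    """Normalise pixel values to [0, 1]; resizing/denoising omitted."""
    return image.astype(np.float32) / 255.0

def extract_features(img):
    """Stand-in feature extractor: a trained CNN would produce feature
    maps here; for illustration we simply flatten the image."""
    return img.reshape(-1)

def classify(features, weights, bias):
    """Linear stand-in for the classifier head: argmax over scores."""
    scores = weights @ features + bias
    return CATEGORIES[int(np.argmax(scores))]

# Deterministic toy example: mid-gray 8x8 image, toy weights that
# always favour the "cable" class.
image = np.full((8, 8), 128, dtype=np.uint8)
feats = extract_features(preprocess(image))
weights = np.zeros((len(CATEGORIES), feats.size))
weights[1] = 1.0
print(classify(feats, weights, np.zeros(len(CATEGORIES))))  # -> cable
```

The "none" category stands in for the case where no flexible winding is present; the real classifier's output categories are the winding types listed in the training-sample description.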
- Step S435: the information of the flexible winding and the position information of the mobile robot are determined.
- step S435, after the presence of the flexible winding and its category have been identified in the image, further comprises determining the position information of the mobile robot and other information of the flexible winding (e.g., the flexible winding's position within the current physical space and its size information).
- determining the location information of the mobile robot may include: acquiring locations of the matched features in the at least two images at the time before and after, and according to the correspondence between the image coordinate system and the physical space coordinate system and the location of the matching feature Determine the position and posture of the mobile robot.
- the image coordinate system is an image coordinate system constructed based on image pixel points, and the two-dimensional coordinate parameter of each image pixel point in the image captured by the imaging device 13 can be described by the image coordinate system.
- the image coordinate system may be a Cartesian coordinate system or a polar coordinate system or the like.
- the physical space coordinate system is a coordinate system constructed based on positions in the actual two-dimensional or three-dimensional physical space; a physical space position can be described in the physical space coordinate system according to the correspondence between a preset image pixel unit and a unit length (or unit angle).
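The preset pixel-unit to physical-unit correspondence amounts to a simple mapping from image coordinates to physical-space coordinates. A minimal sketch for the planar Cartesian case follows; the origin position and scale value are illustrative assumptions:

```python
def image_to_physical(px, py, origin_xy, mm_per_px):
    """Map an image pixel coordinate (px, py) to a physical-space
    coordinate, given the physical position corresponding to the image
    origin and the preset pixel-to-length correspondence."""
    ox, oy = origin_xy
    return (ox + px * mm_per_px, oy + py * mm_per_px)

# Assumed example: image origin maps to (0, 0) mm and one pixel
# corresponds to 0.5 mm on the ground plane.
print(image_to_physical(120, 80, (0.0, 0.0), 0.5))  # -> (60.0, 40.0)
```

A polar or three-dimensional variant would replace the per-axis scale with a unit-angle correspondence or a full camera projection model, as the passage notes.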
- the physical space coordinate system may be a two-dimensional Cartesian coordinate system, a polar coordinate system, a spherical coordinate system, a three-dimensional rectangular coordinate system, or the like.
- the mobile robot may construct the correspondence relationship based on the positions of matching features in at least two images captured at the earlier and later times and on the movement information acquired from the previous time to the current time.
- the movement information of the mobile robot is acquired while the mobile robot moves and the respective images are captured by the imaging device.
- the motion sensing device includes, but is not limited to, a displacement sensor, a gyroscope, a speed sensor, a ranging sensor, a cliff sensor, etc.
- the movement information that can be acquired includes, but is not limited to, displacement information, angle information, distance information relative to obstacles, speed information, traveling direction information, etc.
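Displacement and angle readings from such motion sensors can be accumulated into a pose estimate between image captures. The following dead-reckoning sketch is an assumption about how such data might be combined, not the patent's fusion method:

```python
import math

def dead_reckon(pose, displacement, dtheta):
    """Advance a pose (x, y, heading) by a sensed heading change
    followed by a sensed displacement along the new heading."""
    x, y, th = pose
    th += dtheta
    return (x + displacement * math.cos(th),
            y + displacement * math.sin(th),
            th)

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, 1.0, 0.0)           # forward 1 m
pose = dead_reckon(pose, 1.0, math.pi / 2)   # turn left, forward 1 m
print(tuple(round(v, 6) for v in pose))      # -> (1.0, 1.0, 1.570796)
```

In practice this odometry estimate drifts, which is why the passage combines it with the visual feature matching to anchor the image-to-physical correspondence.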
- the acquired features in the respective images are identified and matched and the image locations of the matching features in the respective images are obtained.
- the mobile robot can acquire two images of the previous time and the current time according to a preset time interval or an image number interval, and identify and match the features in the two images.
- the time interval may be selected between a few milliseconds and a few hundred milliseconds
- the image number interval may be selected between 0 frames and tens of frames.
- Such features include, but are not limited to, shape features, grayscale features, and the like.
- the shape features include, but are not limited to, corner features, line features, edge features, curved features, and the like.
- the grayscale features include, but are not limited to, a grayscale transition feature, a grayscale value above or below a grayscale threshold, the size of an area in the image covering a preset grayscale range, and the like.
- the correspondence is further constructed based on the location of the matching features in the image and the physical spatial location provided by the movement information.
- the initialization device may establish the correspondence by constructing the coordinate parameters of features in the physical space coordinate system and in the image coordinate system. For example, the initialization device may take the physical space position at which the image at the previous moment was captured as the coordinate origin of the physical space coordinate system, and match that origin with the positions of features of that image in the image coordinate system, thereby constructing the correspondence between the two coordinate systems.
- after the correspondence relationship is constructed, the processing device acquires the matching features in the current-time image and the previous-time image, and determines the position and posture of the robot according to the correspondence relationship and those features.
- the number of matching features is usually plural, for example, more than ten.
- the mobile robot searches for matchable features among the identified features according to their positions in the respective images, so that the position and posture of the robot can be determined according to the change in position of the image pixels corresponding to each feature.
- the position of the robot can be obtained from the displacement change in a two-dimensional plane, and its posture can be obtained from the angular change in that plane.
- determining the information of the flexible winding may include: acquiring at least one image, determining the position of the flexible winding in the at least one image according to the position of features in the at least one image, and determining the size information of the flexible winding according to a standard metric in the at least one image.
- the objects in the images taken by the imaging device generally include, for example, walls, tables, sofas, wardrobes, televisions, power sockets, network cable sockets, and the like.
- the imaging device is controlled to capture images in the navigation operating environment of the cleaning robot; thereafter, the graphics of the physical objects in the captured images are recognized by image recognition.
- the graphic of a physical object can be characterized by features such as its grayscale and contour.
- the graphic of a physical object is not limited to its external geometry and may include other graphics presented on the object, such as the two-hole and five-hole jacks on a power socket or the square jack on a network cable socket.
- for example, a power socket and a network cable socket of similar external geometry can be distinguished by the power socket's five-hole jack and the network cable socket's square jack.
- the objects captured indoors by the cleaning robot's imaging device can include a power socket or a network cable socket; since power sockets and network cable sockets are designed according to the GB standard, they do not vary with their environment and can therefore serve as standard parts.
- the standard physical features of the standard part may include the length, width, and height of the power socket, and the structural relationships of the five-hole jack on the power socket.
- the graphics of the standard part and the standard physical features of the standard part may be preset and pre-stored in the robot's storage device.
- accordingly, the manner of obtaining the standard physical features of the standard part includes reading the preset standard physical features from the robot's storage device.
- the standard part may include a standard part designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard.
- the standard physical features may include outline dimensions, standard structural relationships, and the like; for example, the standard physical features of a standard part include its actual physical length, width, and height, and other actual physical dimension data corresponding to the standard in the standard part.
- the spacing between the two holes on a power socket is one example.
- the length and width of a power socket is another example.
- whether a recognized graphic corresponds to the stored graphic of a standard part is determined by analyzing the correspondence, similarity, and consistency of image content, features, structure, relationships, texture, and grayscale.
- when a recognized graphic corresponds to the stored graphic of a standard part, the standard physical features of that standard part are acquired.
- the at least one recognized graphic that corresponds to the stored graphic of a standard part is referred to as a standard graphic.
- for example, a standard power-socket graphic is stored; whether at least one recognized graphic corresponds to it is determined by analyzing the correspondence, similarity, and consistency of image content, features, structure, relationships, texture, and grayscale, and when a recognized graphic does correspond to the stored power-socket graphic, the standard physical features of the power socket are acquired.
- based on the preset correspondence between unit pixel spacing and unit physical length, together with the size of the recognized standard graphic and the real-world size in its standard physical features, the mobile robot can calculate the position of the flexible wrap present in the image within the current physical space, as well as its size information.
- taking a wall-mounted socket as an example: when the processing device recognizes the socket together with the boundary line between the wall and the floor, or recognizes the socket and assumes by default that it is mounted on the wall, then according to the above correspondence it can obtain not only the position of the flexible wrap within the current physical space (e.g., the distance and deflection angle of the flexible wrap from the socket, of the mobile robot from the socket, and of the mobile robot from the flexible wrap), but also, using the spatial positional relationships of the socket's standard measures (e.g., the length and width of the socket frame, or the pitch of the jacks in the socket), the size of the flexible wrap (e.g., its length and thickness) and the area it covers.
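The size estimation from a standard part described above can be sketched with a simple pixel-ratio computation. This is a simplification that ignores perspective (it assumes the wrap and the standard part lie at similar depth in the image); the 86 mm GB-standard socket frame width is an assumed reference value.

```python
def estimate_size_mm(ref_physical_mm, ref_pixels, target_pixels):
    """Estimate a target's physical size from a standard part in the same image.

    ref_physical_mm: known physical dimension of the standard part
                     (e.g., an 86 mm socket frame -- an assumed value).
    ref_pixels:      that dimension measured in image pixels.
    target_pixels:   the target's extent in pixels at a similar depth.
    """
    # The standard part fixes the image-to-physical scale at its depth.
    mm_per_pixel = ref_physical_mm / ref_pixels
    return target_pixels * mm_per_pixel

# socket frame spans 43 px; a cable spans 300 px along its length
print(estimate_size_mm(86.0, 43, 300))  # 600.0 mm
```

The same scale can be applied to the wrap's pixel thickness and to the bounding region it occupies to estimate the covered area.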
- Step 437: control the behavior of the mobile robot according to the determined information of the flexible wrap and the position information of the mobile robot.
- the behavior of the mobile robot may include at least, but is not limited to: the movement of the mobile robot and the posture of the mobile robot.
- correspondingly controlling the behavior of the mobile robot can include controlling the mobile robot, based on the information of the flexible wrap, to move along the original navigation route and over the flexible wrap. That is, if a flexible wrap is identified and, combined with information such as its category, size, and/or position, it is determined that the wrap will not interfere with the normal operation of the mobile robot, then the mobile robot is controlled to move along the original navigation route and over the flexible wrap.
- in such cases the mobile robot can be controlled to move along the original navigation route and pass over the flexible wrap.
- in controlling the mobile robot to move along the original navigation route, various implementations may be employed.
- in one implementation, the mobile robot is controlled to move along the original navigation route at its original moving speed and in its original posture.
- in another implementation, the mobile robot is controlled to change its original moving speed and move along the original navigation route in its original posture; changing the original moving speed may include increasing or decreasing the moving speed.
- in yet another implementation, the mobile robot is controlled to change both its moving speed and its posture while moving along the original navigation route; here too, changing the original moving speed may include increasing or decreasing the moving speed.
- correspondingly controlling the behavior of the mobile robot can include controlling the mobile robot, based on the information of the flexible wrap, to modify the original navigation route and move over the flexible wrap. Specifically, if a flexible wrap is identified and, combined with information such as its category, size, and/or position, it is determined that under the original navigation route the wrap is likely to interfere with the normal operation of the mobile robot but that this can be avoided by changing the route, then the mobile robot is controlled to modify the original navigation route so as to move over the flexible wrap.
- for example, the placement of the flexible wrap in the image may interfere with the normal operation of the mobile robot: a cable, rope, or thread may be laid lengthwise in substantially the same direction as the original navigation route, or a cable, rope, thread, ribbon, etc. may lie exactly beneath the walking wheels or the vacuum air inlet of the robot on the original route.
- in that case the mobile robot can be controlled to modify the original navigation route and move over the flexible wrap: for example, modifying the route so that the new route is perpendicular to the direction in which the wrap is laid, allowing the robot to cross it, or modifying the route so that, when the robot passes over the wrap, the wrap does not lie beneath the robot's walking wheels or vacuum air inlet on the new route.
- while the mobile robot follows the modified navigation route, its moving speed admits different implementations: the speed may be kept unchanged, increased, or decreased.
- correspondingly controlling the behavior of the mobile robot can include controlling the mobile robot, based on the information of the flexible wrap, to modify the original navigation route so as to avoid the flexible wrap. Specifically, if a flexible wrap is identified and, combined with information such as its category, size, and/or position, it is judged that the wrap is likely to interfere with the normal operation of the mobile robot, then the mobile robot is controlled to modify the original navigation route to avoid the flexible wrap.
- in such cases the mobile robot can be controlled to modify the original navigation route so as to avoid the flexible wrap.
- correspondingly controlling the behavior of the mobile robot can include controlling the mobile robot, based on the information of the flexible wrap, to stop moving. Specifically, if a flexible wrap is identified and, combined with information such as its category, size, and/or position, it is judged that the wrap is likely to interfere with the normal operation of the mobile robot, or the degree of interference cannot be reliably judged, then the mobile robot is controlled to stop moving.
- controlling the behavior of the mobile robot accordingly may also include, based on the information of the flexible wrap, ignoring the wrap and controlling the mobile robot to move along the original navigation route. Specifically, if a flexible wrap is identified and, combined with information such as its category, size, and/or position, it is judged that the wrap is not on the original navigation route, then the mobile robot is controlled to move along the original navigation route.
- in such cases the mobile robot can be controlled to move along the original navigation route, ignoring the flexible wrap.
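The control cases above can be summarized as a small decision function. This is an illustrative sketch only: the dictionary keys and interference levels are assumptions, not terms from the patent, and a real controller would derive them from the wrap's category, size, and position.

```python
def decide_behavior(wrap):
    """Map recognized flexible-wrap information to a control action.

    `wrap` uses hypothetical keys: 'on_route' (bool) and 'interference',
    one of 'none' | 'avoidable' | 'likely' | 'unknown'. The mapping mirrors
    the cases described above; key names and levels are assumptions.
    """
    if not wrap['on_route']:
        return 'ignore_and_follow_route'   # wrap is off the planned route
    level = wrap['interference']
    if level == 'none':
        return 'cross_on_route'            # follow original route, drive over
    if level == 'avoidable':
        return 'modify_route_and_cross'    # re-route, then drive over
    if level == 'likely':
        return 'modify_route_and_avoid'    # re-route around the wrap
    return 'stop'                          # interference cannot be judged

print(decide_behavior({'on_route': True, 'interference': 'avoidable'}))
# → modify_route_and_cross
```

Whatever action is chosen, an alarm may additionally be raised so an operator can remove the wrap later.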
- controlling the behavior of the mobile robot may also include controlling the mobile robot to issue an alarm. That is, once a flexible wrap is identified, a message reporting it can be issued immediately, so that an operator can later remove the wrap and clear the obstruction.
- the control method of the mobile robot of the present application can control the imaging device to capture images including the ground, recognize at least one captured image including the ground, and control the behavior of the mobile robot when a flexible wrap is recognized in the image.
- in this way flexible wraps can be detected effectively, and the behavior of the mobile robot can be controlled accordingly based on the detection result.
- the present application also provides a storage medium for an electronic device, the storage medium storing one or more programs; when the one or more computer programs are executed by one or more processors, the one or more processors implement any of the foregoing control methods.
- the portions of the technical solution of the present application that contribute in essence, or that contribute over the prior art, may be embodied in the form of a software product: a machine-readable medium storing executable instructions which, when executed by one or more machines such as a computer, computer network, or other electronic device, cause the one or more machines to perform operations in accordance with embodiments of the present application, for example each step in the control method of the mobile robot.
- the machine-readable medium can include, but is not limited to, a floppy disk, an optical disk, a CD-ROM (Compact Disk Read-Only Memory), a magneto-optical disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a magnetic or optical card, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
- the storage medium may be located in a mobile robot or in a third-party server, for example in a server that provides an application store. No restriction is placed on the specific application store, e.g., Huawei AppGallery, the Apple App Store, etc.
- This application can be used in a variety of general purpose or special purpose computing system environments or configurations.
- the application can be described in the general context of computer-executable instructions executed by a computer, such as program modules.
- program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
- program modules can be located in both local and remote computer storage media including storage devices.
Abstract
A mobile robot and a control method and control system (23) therefor. The mobile robot comprises: a storage device (11, 230) storing a simultaneous localization and mapping application and a behavior control application; an imaging device (13, 21) for acquiring images of the operating environment; a processing device (15, 232) for controlling the imaging device (13, 21) to capture images including the ground and, upon recognizing a flexible wrap in an image, invoking the simultaneous localization and mapping application and the behavior control application from the storage device (11, 230) to control the behavior of the mobile robot; and a movement system (17, 25) for driving the mobile robot to move based on control instructions issued by the processing device (15, 232). Flexible wraps are thereby detected effectively, and the behavior of the cleaning robot can be controlled accordingly based on the detection result.
Description
The present application relates to the technical field of mobile robots, and in particular to a mobile robot and a control method and control system for a mobile robot.
A mobile robot is a machine that performs specific work automatically. It can accept human direction, run pre-programmed routines, or act according to principles formulated with artificial-intelligence techniques. Such mobile robots can be used indoors or outdoors, in industry, commerce, or the home; they can replace security patrols, greeters, or order-takers, clean floors in place of people, and also serve in family companionship, office assistance, and so on.
When mobile robots such as cleaning robots, care robots, and greeting robots move in their working mode, the complexity of the working environment means that, owing to poor obstacle-avoidance performance, they frequently bump into obstacles, damaging furniture and themselves and disrupting their normal work. This is especially true for flexible wraps (e.g., cables, ropes, ribbons, cloth scraps), which tend to entangle the robot's wheels, immobilizing it and, in severe cases, toppling it and causing a safety accident. For a cleaning robot, a flexible wrap is even more likely to entangle the cleaning system, for example wrapping around the cleaning brush or wrapping around or blocking the vacuum components.
Generally, to detect obstacles on the ground in time, the obstacle-detection technologies commonly adopted on existing mobile robots mainly include the following:
Mechanical collision detection: a mechanical baffle connected to an electronic switch is mounted at the lower front of the mobile robot; when an obstacle is struck, the switch changes from open to closed, and the obstacle ahead is thereby detected. With this approach, detection requires an actual collision, giving a poor user experience; moreover, a flexible wrap is easily pushed along by the mobile robot, and the impact is too weak to trigger the electronic switch, leading to missed detection.
Infrared ranging detection: one or more infrared ranging sensors are mounted on the mobile robot, and an obstacle ahead is inferred when the measured distance falls below a set threshold. Infrared detection is strongly affected by ambient light and suffers from a large near-field blind zone. Obstacles made of glass, light-absorbing, or completely black materials are easily missed, and accuracy consistency is poor. Constrained by the housing structure, the infrared sensor cannot be mounted too low, so low obstacles sitting below the infrared beam, such as flexible wraps, are missed.
Ultrasonic ranging detection: one or more ultrasonic ranging sensors are mounted on the mobile robot, and an obstacle ahead is inferred when the measured distance falls below a set threshold. On the one hand, ultrasound is easily affected by ambient temperature, reflector material, multipath propagation of sound waves, and other factors. On the other hand, constrained by the housing structure, the ultrasonic sensor cannot be mounted too low, so low obstacles sitting below the ultrasonic beam, such as flexible wraps, are missed.
It can thus be seen that detecting flexible wraps is a problem that existing mobile robots urgently need to solve.
Summary of the Invention
The purpose of the present application is to disclose a mobile robot and a control method and control system therefor, so as to improve the mobile robot's detection accuracy for flexible wraps.
To achieve the above and other purposes, in a first aspect the present application discloses a control method for a mobile robot having an imaging device, the control method comprising the following steps: in the working mode of the mobile robot, controlling the imaging device to capture images including the ground; recognizing at least one captured image including the ground, and controlling the behavior of the mobile robot when a flexible wrap is recognized in the image.
In a second aspect the present application discloses a mobile robot, comprising: a storage device storing a simultaneous localization and mapping application and a behavior control application; an imaging device for acquiring images of the operating environment in the working mode of the mobile robot; a processing device, connected to the storage device and the imaging device, for controlling the imaging device, in the working mode of the mobile robot, to capture images including the ground and, upon recognizing a flexible wrap in an image, invoking the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot; and a movement system, connected to the processing device, for driving the mobile robot to move based on control instructions issued by the processing device.
In a third aspect the present application discloses a control system for a mobile robot equipped with an imaging device, the control system comprising: a storage device storing a simultaneous localization and mapping application and a behavior control application; and a processing device, connected to the storage device and the imaging device, for controlling the imaging device, in the working mode of the mobile robot, to capture images including the ground and, upon recognizing a flexible wrap in an image, invoking the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot.
In a fourth aspect the present application discloses a cleaning robot, comprising: an imaging device; the control system as described above; a movement system, connected to the control system, for driving the mobile robot to move based on control instructions issued by the control system; and a cleaning system, connected to the control system, for performing cleaning operations on the ground while the mobile robot moves.
In a fifth aspect the present application discloses a computer-readable storage medium storing at least one program which, when executed by a processor, implements the steps of the control method for a mobile robot described above.
The mobile robot and its control method and control system disclosed in the present application can control the imaging device to capture images including the ground, recognize at least one captured image including the ground, and control the behavior of the mobile robot when a flexible wrap is recognized in the image. With the present application, flexible wraps can be detected effectively, and the behavior of the mobile robot can be controlled accordingly based on the detection result.
FIG. 1 is a schematic structural diagram of the mobile robot of the present application in one embodiment.
FIG. 2 is a schematic diagram of the positional relationship of matched features in two images acquired at successive moments.
FIG. 3 is a schematic structural diagram of the cleaning robot of the present application in one embodiment.
FIG. 4 is a schematic flowchart of the control method of the mobile robot of the present application in one embodiment.
FIG. 5 is a detailed flowchart of FIG. 4.
The embodiments of the present application are described below by way of specific examples; those skilled in the art can readily understand other advantages and effects of the present application from the contents disclosed in this specification.
In the following description, reference is made to the accompanying drawings, which describe several embodiments of the present application. It should be understood that other embodiments may also be used, and mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description should not be considered limiting, and the scope of the embodiments of the present application is defined only by the claims of the granted patent.
Although the terms first, second, etc. are used herein in some instances to describe various elements, these elements should not be limited by these terms. The terms are used only to distinguish one element from another. For example, a first preset threshold may be called a second preset threshold, and similarly a second preset threshold may be called a first preset threshold, without departing from the scope of the various described embodiments. The first preset threshold and the second preset threshold both describe a threshold, but unless the context clearly indicates otherwise, they are not the same preset threshold. A similar situation applies to a first volume and a second volume.
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, categories, and/or groups, but do not exclude the presence, occurrence, or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are interpreted as inclusive, meaning any one or any combination. Therefore, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
The present application relates to the field of mobile robots. A mobile robot is a machine that performs specific work automatically. It can accept human direction, run pre-programmed routines, or act according to principles formulated with artificial-intelligence techniques. Such mobile robots can be used indoors or outdoors, in industry, commerce, or the home; they can replace security patrols, greeters, or order-takers, clean floors in place of people, and also serve in family companionship, office assistance, and so on. Taking the most common cleaning robot as an example: a cleaning robot, also called an automatic sweeper or intelligent vacuum cleaner, is a kind of smart household appliance that can complete cleaning tasks such as sweeping, vacuuming, and mopping. Specifically, a cleaning robot can be controlled by a person (an operator holding a remote control) or can complete floor cleaning in a room on its own according to certain set rules.
Owing to the complexity of working environments, a mobile robot moving in its working mode is likely to encounter various obstacles; detecting obstacles in time and adjusting behavior accordingly is therefore a necessary skill for a mobile robot. However, mobile robots in the prior art still fail to detect certain obstacles during operation, and detection of flexible wraps in particular has blind spots. Taking a cleaning robot as an example: when a cleaning robot performs indoor cleaning, conventional infrared or ultrasonic ranging cannot easily detect flexible wraps on the ground (e.g., cables, ropes, ribbons). If the cleaning robot takes no corresponding behavioral control, these flexible wraps may not only entangle its wheels, immobilizing it and, in severe cases, toppling it and causing a safety accident, but may also entangle its cleaning system, for example wrapping around the cleaning brush or wrapping around or blocking the vacuum components, so that the cleaning robot cannot carry out its cleaning work.
Extending from the above cleaning-robot example to mobile robots used in other application scenarios, in order to improve a mobile robot's detection of flexible wraps and the corresponding behavior control, the present application provides a control system for a mobile robot. Referring to FIG. 1, a schematic structural diagram of the mobile robot of the present application in one embodiment is shown. As shown in FIG. 1, the mobile robot comprises a storage device 11, an imaging device 13, a processing device 15, and a movement system 17.
The storage device 11 stores a simultaneous localization and mapping application and a behavior control application.
The simultaneous localization and mapping application is a SLAM (Simultaneous Localization And Mapping) application, a fundamental application in the field of intelligent robots. The localization technology of a mobile robot can include a process that allows the robot to determine its position and orientation (or "pose") relative to its surroundings; a mobile robot that can build a map of its surroundings can locate itself in that map to exhibit a degree of autonomy. The problem can be described as follows: when a mobile robot is in an unknown environment, is there a way for it to gradually draw a complete map of the environment while simultaneously deciding which direction it should travel? That is, achieving intelligence requires completing three tasks: the first is localization, the second is mapping, and the third is the subsequent path planning (navigation). The behavior control application in the present application refers to controlling the mobile robot's navigation and performing orientation (or "pose") adjustments according to set information or instructions. "Pose" herein includes the position of the mobile robot in the movement space (e.g., x, y coordinates) and its angular orientation relative to, for example, a reference object (e.g., a wall) or a reference direction in the movement space. It should be noted that, in order to compensate for errors in maps built on SLAM technology, a technology based on visual simultaneous localization and mapping (VSLAM) can use image data from an image sensor to compensate for errors in the movement information provided by the sensors, providing more accurate navigation capability for a cleaning robot.
The behavior control application is a fundamental application in the field of intelligent robots and is associated with the processing device 15 and the movement system 17. Using the behavior control application, the processing device 15 can control the movement system 17. In practical applications, the behavior control application can be combined with the aforementioned SLAM application: the processing device 15 can then issue control instructions to the movement system 17 according to the localization information and map information obtained by the SLAM application, so that the movement system performs the corresponding behavior. "Behavior" herein includes the movement and the posture of the mobile robot.
In addition, the storage device 11 also pre-stores the standard physical features of at least one standard part. The standard part may include a standard part designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard. For example, industry standards include the machinery industry standard JB and the building materials industry standard JC; national standards include the Chinese GB standard, the German DIN standard, and the British BS standard; international standards include the international ISO standard; custom standards are detailed later. The standard physical features may include outline dimensions, standard structural relationships, and the like; for example, the standard physical features of a standard part include its actual physical length, width, and height, and other actual physical dimension data corresponding to the standard. Examples include the spacing between the two holes on a power socket, the length and width of a power socket, the length and width of a baseboard or a floor tile, and the length, width, and thickness of a carpet.
Here, the storage device 11 includes, but is not limited to: read-only memory (ROM), random access memory (RAM), and nonvolatile memory (NVRAM), for example one or more magnetic disk storage devices, flash memory devices, or other nonvolatile solid-state storage devices. In some embodiments, the storage device 11 may also include memory remote from the one or more processors, for example network-attached storage accessed via an RF circuit or external port and a communication network (not shown), where the communication network may be the Internet, one or more intranets, a local area network (LAN), a wide area network (WAN), a storage area network (SAN), etc., or an appropriate combination thereof. A memory controller may control access to the storage device by other components of the mobile robot, such as the central processing unit (CPU) and peripheral interfaces.
The imaging device 13 is used to acquire images of the operating environment in the working mode of the mobile robot. The imaging device 13 includes, but is not limited to: a camera, a video camera, a camera module integrating an optical system or CCD chip, a camera module integrating an optical system and CMOS chip, and the like. The power supply of the imaging device 13 can be controlled by the power supply system of the mobile robot: while the robot is powered on and moving, the imaging device 13 begins capturing images and provides them to the processing device 15. For example, the imaging device in a cleaning robot caches the captured indoor images in a preset video format in the storage device 11, from which they are obtained by the processing device 15. The imaging device 13 captures images while the mobile robot is moving. In some embodiments, the imaging device 13 can be arranged on the top surface of the mobile robot; for example, the imaging device of a cleaning robot is arranged in the middle or at the edge of the top surface of its housing, with the optical axis of its field of view at ±30° relative to the vertical. For example, the angle of the optical axis of the cleaning robot's imaging device relative to the vertical is -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°. In some embodiments, the imaging device 13 can be arranged at the junction of the top surface and a side surface of the mobile robot. For example, at least one recessed structure is provided at the junction of the top surface and a side surface of the cleaning robot's housing (the recess may be at the front, rear, or side of the housing), and the imaging device is arranged inside the recess. The angle α between the optical axis of the lens in the imaging device and the plane defined by the top surface of the housing (which may coincide with the horizontal plane, i.e., when the mobile robot is placed steadily on a horizontal surface, the plane defined by the top surface of the housing is parallel to that horizontal surface) is 61° to 85°, i.e., the angle α is 61°, 62°, 63°, ..., 84°, or 85°. The lens in the imaging device is tilted forward, so it can capture more environmental information; for example, compared with a lens pointing vertically upward, a forward-tilted imaging device can capture more of the environment ahead of the cleaning robot, such as part of the ground area in front of it. It should be noted that those skilled in the art will understand that the above angles between the optical axis and the vertical or the top surface of the housing take integer values but are not limited to a precision of 1°; according to the design requirements of the actual mobile robot, the precision of the angle can be higher, e.g., 0.1°, 0.01°, or finer, and exhaustive examples are not given here.
The processing device 15 includes one or more processors. The processing device 15 is operably coupled to the read-only memory, random access memory, and/or nonvolatile memory in the storage device 11. The processing device 15 can execute instructions stored in the read-only memory, random access memory, and/or nonvolatile memory to perform operations in the robot, such as extracting features from images and performing localization in a map based on the features, or acquiring images and recognizing them. The processors may include one or more general-purpose microprocessors, one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), or any combination thereof. The processing device 15 is also operably coupled to an I/O port, which can enable the mobile robot to interact with various other electronic devices, and an input structure, which can enable a user to interact with the computing device. The input structure may therefore include buttons, a keyboard, a mouse, a touchpad, etc. The other electronic devices may be the movement motor in the movement device of the mobile robot, or a slave processor in the mobile robot dedicated to controlling the movement device and the cleaning device, such as a microcontroller unit (MCU).
In one example, the processing device 15 is connected to the storage device 11 and the imaging device 13 via data lines. The processing device 15 interacts with the storage device 11 via data read/write technology, and with the imaging device 13 via an interface protocol. The data read/write technology includes, but is not limited to: high-speed/low-speed data interface protocols, database read/write operations, etc. The interface protocols include, but are not limited to: the HDMI interface protocol, serial interface protocols, etc.
The processing device 15 is used, in the working mode of the mobile robot, to control the imaging device to capture images including the ground and, upon recognizing a flexible wrap in an image, to invoke the simultaneous localization and mapping application and the behavior control application from the storage device to control the behavior of the mobile robot.
The processing device 15 is used to obtain at least one image from the images captured by the imaging device 13 and to recognize the at least one image to detect whether a flexible wrap is present in it.
As mentioned above, generally, when a mobile robot moves on a working surface (e.g., the ground) in its working mode, detection of flexible wraps on the ground is lacking. Therefore, in the present application, in the working mode of the mobile robot, the imaging device 13 can be controlled to capture images including the ground, where "the ground" can more specifically be the ground the mobile robot will subsequently move onto along its walking path. Taking a cleaning robot as an example, in some embodiments the imaging device 13 can be used to capture images of the ground ahead of the cleaning robot in its direction of movement.
Recognition of flexible wraps in the at least one image is implemented using a flexible-wrap image classifier: during recognition, the image to be recognized is fed as input into the classifier, which then outputs the recognition result. In this embodiment, the flexible-wrap image classifier is trained with a convolutional neural network. A convolutional neural network (CNN) is an architecture of deep neural networks closely related to image processing. The weight-sharing structure of a CNN makes it more similar to biological neural networks; this structure not only reduces the complexity of the network model but also reduces the number of weights, and it is highly invariant to translation, scaling, tilting, and other forms of deformation. A CNN can take an image directly as the network input, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. These advantages give it unique strengths in image recognition.
The flexible-wrap image classifier is trained with a convolutional neural network.
In some embodiments, the training may include: first, constructing a training sample set by collecting images containing flexible wraps that conform to preset rules as training samples; then training on that set to obtain the flexible-wrap image classifier. When constructing the training sample set, in one implementation, images of flexible wraps conforming to the preset rules can be collected directly, for example by searching the network for relevant flexible-wrap images or photographing them oneself, and selecting typical conforming images as training samples. In other implementations, some or all flexible-wrap images can be selected from existing standard libraries of various flexible wraps as training samples: for example, selecting some or all flexible-wrap images from different standard libraries and combining them into the training sample set, or selecting at least one standard library and taking some or all of its images as the training sample set. Here, a training image containing a flexible wrap can be a simple image with a single background (e.g., a single solid color) or an image against a real-world background. Since in the present application the images acquired under control of the mobile robot by the imaging device 13 include the ground, training images can be ground images containing flexible wraps. As to specific flexible wraps, in the present application they include but are not limited to: cables, ropes, ribbons, shoelaces, towels, cloth scraps, cotton wadding, plant vines, and the like. As to the ground, depending on the actual application environment, it includes but is not limited to: cement floors, painted floors, floors laid with composite flooring, floors laid with solid wood flooring, carpeted floors, and the like. Therefore, for a specific flexible wrap, a corresponding training sample set can be constructed: a cable training sample set (e.g., images of various cables presented in different forms on different floors), a rope training sample set, a ribbon training sample set, a cloth-scrap training sample set, a cotton-wadding training sample set, a plant-vine training sample set, and so on. In addition, before training on the constructed training sample set, the images in it can undergo corresponding image preprocessing; in some embodiments, this preprocessing includes but is not limited to cropping, compression, grayscale processing, image filtering, and/or noise filtering.
In some embodiments, the training may include: first, constructing a training sample set by collecting images containing flexible wraps that conform to preset rules as positive samples, and images containing no flexible wrap or containing wraps that do not conform to the preset rules as negative samples; then training on that set to obtain the flexible-wrap image classifier. For the positive samples, in one implementation, conforming flexible-wrap images can be collected directly, for example by searching the network or photographing them oneself and selecting typical conforming images as positive samples; in other implementations, some or all flexible-wrap images can be selected from existing standard libraries of various flexible wraps, either by combining images from different libraries into a positive sample set or by taking some or all images from at least one selected library as positive samples. A positive-sample image containing a flexible wrap can be a simple image with a single background (e.g., a single solid color) or an image against a real-world background; since the images acquired by the imaging device 13 include the ground, positive samples can be ground images containing flexible wraps. The flexible wraps include but are not limited to cables, ropes, ribbons, shoelaces, towels, cloth scraps, cotton wadding, plant vines, etc.; the ground, depending on the actual application environment, includes but is not limited to cement floors, painted floors, composite flooring, solid wood flooring, carpeted floors, etc. Therefore, for each specific flexible wrap, a corresponding positive sample set can be constructed (e.g., a cable positive sample set of various cables presented in different forms on different floors, and likewise rope, ribbon, cloth-scrap, cotton-wadding, and plant-vine positive sample sets). For the negative samples, in one implementation, images containing no flexible wrap or containing non-conforming wraps can be collected directly, for example by searching the network for such images or photographing them oneself and selecting from them; in other implementations, some or all images can be selected from existing standard libraries that contain no flexible wraps, either by combining images from different such libraries into a negative sample set or by taking some or all images from at least one selected library as negative samples. Accordingly, for each specific flexible wrap, a corresponding negative sample set can be constructed (e.g., a cable negative sample set of images on different floors containing no cables or containing cables that do not conform to the preset rules, and likewise rope, ribbon, cloth-scrap, cotton-wadding, and plant-vine negative sample sets). In addition, before training on the constructed training sample set, the images in it can undergo corresponding image preprocessing; in some embodiments this includes but is not limited to cropping, compression, grayscale processing, image filtering, and/or noise filtering.
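The positive/negative sample-set construction described above can be sketched as a small directory-based labeling helper. This is an illustrative assumption about how samples might be organized on disk (one folder of positive images, one of negative images); the directory names, extensions, and labels are not specified by the application.

```python
import os

def build_sample_set(positive_dir, negative_dir, exts=('.png', '.jpg')):
    """Assemble a labeled training set for the wrap classifier.

    Label 1 = image contains a conforming flexible wrap (positive sample);
    label 0 = no wrap, or a non-conforming wrap (negative sample).
    Directory layout and extensions are hypothetical conventions.
    """
    samples = []
    for label, folder in ((1, positive_dir), (0, negative_dir)):
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith(exts):
                samples.append((os.path.join(folder, name), label))
    return samples

# samples = build_sample_set('wraps/positive', 'wraps/negative')
```

The resulting (path, label) pairs would then be preprocessed and fed to the CNN training loop.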
Subsequently, the trained flexible-wrap image classifier can be used to recognize images. In the present application, during image recognition, the image to be recognized is fed as input into the flexible-wrap image classifier, which then outputs the corresponding recognition result. In some embodiments, recognizing an image with the flexible-wrap image classifier may include at least the following steps: performing image preprocessing on the image to be recognized; performing feature extraction on the preprocessed image; and inputting the features of the image to be recognized into the flexible-wrap image classifier to obtain the recognition result.
The image preprocessing of the image to be recognized includes, but is not limited to: cropping, compression, grayscale processing, thresholding, etc.; the preprocessing may of course also include image filtering, noise filtering, and the like. Taking grayscale processing and thresholding as an example: the image to be recognized is converted to a grayscale image, and the grayscale image is then thresholded (for example, after binarization the grayscale image becomes a binary, black-and-white image that reflects the overall and local characteristics of the image). Feature extraction on the preprocessed image includes, but is not limited to: extracting contour features, texture features, etc. of the image to be recognized.
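The grayscale-then-binarize preprocessing just described can be sketched in a few lines. This is a minimal stand-in for a real image pipeline: the image is represented as nested lists of RGB tuples, the ITU-R BT.601 luma weights are a common choice rather than one mandated by the application, and the threshold value is an assumption.

```python
def preprocess(image, threshold=128):
    """Grayscale then binarize an image, as in the preprocessing step above.

    `image` is a row-major list of rows of (R, G, B) tuples; the result is a
    black-and-white image of 0/255 values. The threshold is an assumed value.
    """
    # Grayscale via the common BT.601 luma weights (an assumed choice).
    gray = [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in image]
    # Thresholding: pixels at or above the threshold become white, else black.
    return [[255 if v >= threshold else 0 for v in row] for row in gray]

img = [[(200, 200, 200), (10, 10, 10)]]
print(preprocess(img))  # [[255, 0]]
```

In practice a library such as OpenCV or Pillow would perform these steps on real image arrays.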
It is worth noting that in some embodiments, the aforementioned flexible-wrap image classifier used for flexible-wrap recognition can be pre-stored in the storage device 11. In one implementation, before the mobile robot is sold to an end user (e.g., before it leaves the factory, before it is distributed to points of sale, or before it is sold to the end user at a point of sale), the flexible-wrap image classifier is written into the storage device 11; generally, permissions can be set on the classifier to prohibit end users from modifying it. This is of course not limiting: for example, some or all permissions on the classifier may be opened, allowing end users to modify it (e.g., to edit, add, or remove items); or the classifier may be updated after the mobile robot connects to the network and establishes a communication connection with the corresponding manufacturer's server or application service provider's server. In other implementations, the flexible-wrap images may be stored in a cloud system in remote communication with the mobile robot; in that case, during image recognition, the processing device 15 can obtain at least one image from those captured by the imaging device 13 and send it to the cloud system, where the flexible-wrap image classifier recognizes the at least one image and remotely returns the recognition result to the mobile robot.
Therefore, using the processing device 15, at least one image can be obtained from the images captured by the imaging device 13 and recognized with the flexible-wrap image classifier, thereby detecting whether a flexible wrap is present in the at least one image and obtaining the specific category of any wrap present.
The processing device 15 can also be used, upon recognizing a flexible wrap in an image, to invoke the simultaneous localization and mapping application and the behavior control application from the storage device 11 to control the behavior of the mobile robot.
The processing device 15 invokes the localization and mapping application to perform: acquiring the positions of matched features in at least two images from earlier and later moments, and determining the position and posture of the mobile robot according to the correspondence between the image coordinate system and the physical-space coordinate system and the positions of the matched features.
In the present application, the storage device 11 also stores the correspondence between the image coordinate system and the physical-space coordinate system. The image coordinate system is built on image pixels; the two-dimensional coordinate parameters of each pixel in an image captured by the imaging device 13 can be described by this coordinate system, which may be a rectangular coordinate system, a polar coordinate system, or the like. The physical-space coordinate system is built on positions in actual two- or three-dimensional physical space; a physical-space position can be described in it according to a preset correspondence between image pixel units and unit length (or unit angle). The physical-space coordinate system may be a two-dimensional rectangular, polar, spherical, or three-dimensional rectangular coordinate system.
For mobile robots used in scenarios where the ground is not highly complex, this correspondence can be pre-stored in the storage device before leaving the factory. However, for mobile robots used in scenarios where the ground is highly complex, such as cleaning robots, the correspondence can be obtained by field testing at the site of use and saved in the storage device 11. In some embodiments, the mobile robot further includes a movement sensing device (not shown) for acquiring movement information of the robot, including but not limited to: displacement sensors, gyroscopes, speed sensors, ranging sensors, cliff sensors, etc. While the robot moves, the movement sensing device continuously senses movement information and provides it to the processing device. The displacement sensors, gyroscopes, speed sensors, etc. may be integrated in one or more chips. The ranging sensors and cliff sensors may be arranged on the body side of the robot; for example, the ranging sensor in a cleaning robot is arranged at the edge of the housing, and the cliff sensor at the bottom of the robot. Depending on the type and number of sensors arranged on the robot, the movement information the processing device can acquire includes but is not limited to: displacement information, angle information, distance to obstacles, speed information, heading information, etc.
To construct the correspondence, in some embodiments the mobile robot further includes an initialization device (not shown) that can construct the correspondence based on the positions of matched features in at least two images from earlier and later moments and the movement information acquired from the earlier moment to the current moment. Here, the initialization device may be a program module whose program portion is stored in the storage device and executed when invoked by the processing device; when the correspondence is not stored in the storage device, the processing device invokes the initialization device to construct it.
Here, the initialization device acquires, while the robot moves, the movement information provided by the movement sensing device and the images captured by the imaging device 13. To reduce the accumulated error of the movement sensing device, the initialization device can acquire the movement information and at least two images within a short period of the robot's movement: for example, when it detects that the robot is moving in a straight line, or when it detects that the robot is turning. The interval between the at least two images acquired while turning can be shorter than the interval while moving straight.
Next, the initialization device identifies and matches features in each image and obtains the image positions of the matched features in each image, where the features include but are not limited to corner features, edge features, line features, curve features, etc. For example, the initialization device can obtain the image positions of matched features via a tracking device (not shown) that tracks positions containing the same features in at least two images from earlier and later moments.
The initialization device then constructs the correspondence from the image positions and the physical-space positions provided by the movement information. Here, it can establish the correspondence by constructing feature coordinate parameters of the physical-space coordinate system and the image coordinate system; for example, it can take the physical-space position at which the image of the previous moment was captured as the origin of the physical-space coordinate system and map that origin to the positions of the matched features in the image coordinate system, thereby constructing the correspondence between the two coordinate systems.
It should be noted that the working process of the initialization device may be executed based on a user instruction or may be transparent to the user. For example, its execution is started when the correspondence is not stored in the storage device 11 or when the correspondence needs to be updated. No limitation is made here.
The correspondence can be saved in the storage device in the form of a program implementing the corresponding algorithm, a database, or the like. To this end, the software components stored in the memory include an operating system, a communication module (or instruction set), a contact/motion module (or instruction set), a graphics module (or instruction set), and applications (or instruction sets). In addition, the storage device also stores temporary or persistent data including the images captured by the imaging device and the positions and postures obtained by the processing device during localization computation.
After the correspondence is constructed, the processing device acquires the matched features in the current-moment image and the previous-moment image and determines the position and posture of the robot according to the correspondence and those features.
Here, the processing device 15 can acquire two images at a previous moment t1 and the current moment t2 at preset time intervals or image-count intervals, and identify and match features in the two images. Depending on the processing capabilities of the hardware and software used, the time interval can be chosen between several milliseconds and several hundred milliseconds, and the image-count interval between 0 frames and several tens of frames. The features include but are not limited to: shape features and grayscale features. Shape features include but are not limited to: corner features, line features, edge features, curve features, etc. Grayscale features include but are not limited to: grayscale jump features, grayscale values above or below a grayscale threshold, sizes of regions in the image within a preset grayscale range, etc.
For accurate localization, the number of matched features is usually large, for example more than ten. To this end, the processing device 15 searches the identified features for matchable features according to their positions in the respective images. For example, referring to FIG. 2, which shows the positional relationship of matched features in two images acquired at moments t1 and t2: after identifying features in each image, the processing device 15 determines that image P1 contains features a1 and a2 and that image P2 contains features b1, b2, and b3, with a1 belonging to the same feature as b1 and b2, and a2 to the same feature as b3. The processing device 15 can first determine that in image P1 feature a1 lies to the left of a2 at a spacing of d1 pixels; it also determines that in image P2 feature b1 lies to the left of b3 at a spacing of d1' pixels and that b2 lies to the right of b3 at a spacing of d2' pixels. The processing device 15 matches the positional relationships of b1 to b3 and of b2 to b3 against that of a1 to a2, and the pixel spacings of b1 to b3 and of b2 to b3 against that of a1 to a2, thereby concluding that feature a1 in image P1 matches feature b1 in image P2 and that feature a2 matches feature b3. In the same way, the processing device 15 matches each feature, so as to locate the position and posture of the robot from the position changes of the image pixels corresponding to each feature. The position of the robot can be obtained from the displacement change in the two-dimensional plane, and the posture from the angular change in that plane.
Here, the processing device 15 can, according to the correspondence, determine the image-position offsets of multiple features between the two images, or determine the physical-position offsets of multiple features in physical space, and combine either kind of offset information to compute the relative position and posture of the robot from moment t1 to moment t2. For example, through coordinate transformation the processing device 15 obtains that, from moment t1 of capturing image P1 to moment t2 of capturing image P2, the robot moved a length m on the ground and rotated left by an angle of n degrees. Taking a cleaning robot as an example: when the cleaning robot has already built a map, the position and posture obtained by the processing device 15 can help it determine whether it is on its navigation route; when it has not built a map, the position and posture can help it determine relative displacement and relative rotation and use this data to draw the map.
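The offset computation described above can be illustrated with a deliberately simplified stand-in for the spacing-based matching: each previous-moment feature votes with its offset to the nearest current-moment feature, and the median offset is taken as the dominant image displacement. Real feature matching uses descriptors and the pairwise-spacing consistency described in FIG. 2; this nearest-neighbor voting is an assumption made for brevity.

```python
from statistics import median

def estimate_shift(feats_prev, feats_curr):
    """Estimate the dominant pixel shift between two feature sets.

    feats_prev / feats_curr are lists of (x, y) pixel positions of features
    identified in the previous and current images. Nearest-neighbor matching
    is a simplifying assumption, not the patented matching scheme.
    """
    dxs, dys = [], []
    for (x0, y0) in feats_prev:
        # Take the nearest current feature as the match candidate.
        x1, y1 = min(feats_curr, key=lambda p: (p[0] - x0) ** 2 + (p[1] - y0) ** 2)
        dxs.append(x1 - x0)
        dys.append(y1 - y0)
    # The median suppresses occasional mismatches (outlier votes).
    return median(dxs), median(dys)

prev = [(10, 10), (40, 10), (25, 30)]
curr = [(15, 12), (45, 12), (30, 32)]
print(estimate_shift(prev, curr))  # (5, 2)
```

The resulting pixel shift, mapped through the stored coordinate-system correspondence, yields the relative displacement used to update the robot's position.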
The processing device 15 is also used to invoke the localization and mapping application to perform: acquiring at least one image, determining the position of the flexible wrap in the at least one image according to the positions of features in that image, and determining the size information of the flexible wrap according to a standard measure in the at least one image.
The processing device 15 can process, analyze, and understand the images captured by the imaging device 13 using image recognition methods such as those based on convolutional neural networks or on wavelet moments, so as to recognize targets and objects of various patterns. In addition, the processing device can also seek similar image targets through analysis of the correspondence, similarity, and consistency of image content, features, structure, relationships, texture, grayscale, etc.
In one embodiment, taking a cleaning robot as an example: since a cleaning robot usually performs indoor cleaning, the physical objects in images captured by the imaging device generally include, for example, walls, tables, sofas, wardrobes, televisions, power sockets, network cable sockets, and the like. In this example, the imaging device 13 first captures images in the navigation operating environment of the cleaning robot and provides them to the processing device 15, which recognizes the graphics of the physical objects in the captured images through image recognition. The graphic of a physical object can be characterized by features such as its grayscale and contour. Meanwhile, the graphic of a physical object is not limited to its external geometry and may include other graphics presented on it, such as the two-hole and five-hole jacks on a power socket or the square jack on a network cable socket. In view of this, for example, a power socket and a network cable socket with similar external geometry can be distinguished by the power socket's five-hole jack and the network cable socket's square jack. Moreover, where the objects in images the cleaning robot's imaging device can capture indoors include power sockets and network cable sockets, since these are designed according to the GB standard and therefore do not vary with their environment, they can serve as standard parts. The standard physical features of a standard part can include the length, width, and height of the power socket and the structural relationships of its five-hole jack. In some embodiments, the graphics of the standard part and its standard physical features may be preset and pre-stored in the robot's storage device; the manner of obtaining the standard physical features therefore includes reading the preset standard physical features from the robot's storage device. The standard part may include one designed based on at least one of an industry standard, a national standard, an international standard, and a custom standard: for example, industry standards such as the machinery industry standard JB and the building materials industry standard JC; national standards such as the Chinese GB standard, the German DIN standard, and the British BS standard; international standards such as the ISO standard; custom standards are detailed later. The standard physical features can include outline dimensions, standard structural relationships, and so on, for example the actual physical length, width, and height of the standard part and other actual physical dimension data corresponding to the standard, such as the spacing between the two holes on a power socket, the length and width of a power socket, the length and width of a baseboard or a floor tile, or the length, width, and thickness of a carpet.
Furthermore, for the graphics of physical objects recognized in the image and the stored graphics of standard parts, the processing device 15 determines, through analysis of the correspondence, similarity, and consistency of image content, features, structure, relationships, texture, grayscale, etc., whether at least one recognized graphic corresponds to a stored standard-part graphic; when it does, the standard physical features of that standard part are acquired. The at least one graphic corresponding to the stored standard-part graphic is called a standard graphic. Taking the power socket as an example: the storage device 11 stores a standard power-socket graphic, and the processing device 15 determines by the same analysis whether at least one recognized graphic corresponds to the stored power-socket graphic; when it does, the standard physical features of the power socket are acquired.
Therefore, based on the preset correspondence between unit pixel spacing and unit length in actual physical space, together with the size of the recognized standard graphic and the real-world size in the corresponding standard physical features, the processing device 15 can calculate the position of the flexible wrap present in the image within the current physical space and its size information. Taking a wall-mounted socket as an example: when the processing device recognizes the socket together with the boundary line between the wall and the floor, or recognizes the socket and assumes by default that it is mounted on the wall, then according to the above correspondence the processing device 15 can obtain not only the position of the flexible wrap within the current physical space (e.g., the distance and deflection angle of the flexible wrap from the socket, of the mobile robot from the socket, and of the mobile robot from the flexible wrap), but can also use the spatial positional relationships of the socket's standard measures (e.g., the length and width of the socket frame or the pitch of the jacks in the socket) to obtain the size of the flexible wrap (e.g., its length and thickness) and the area it covers.
The processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot. In the present application, the behavior control application refers to controlling the mobile robot's navigation and performing posture adjustments according to set information or instructions; using it, the processing device 15 can control the movement system 17 of the mobile robot.
The movement system 17 is connected to the processing device 15 and is used to drive the mobile robot to move based on the control instructions issued by the processing device 15. In a practical implementation, the movement system 17 may include a walking mechanism and a drive mechanism, where the walking mechanism may be arranged at the bottom of the mobile robot and the drive mechanism is built into the robot's housing. Further, the walking mechanism may adopt walking wheels: in one implementation, the walking mechanism may for example include at least two universal walking wheels by which forward, backward, steering, and rotating movements are achieved. In other implementations, the walking mechanism may for example include a combination of two straight-travel walking wheels and at least one auxiliary steering wheel, where without the participation of the at least one auxiliary steering wheel the two straight-travel wheels are mainly used for going forward and backward, while with its participation and cooperation with the two straight-travel wheels, steering, rotation, and other movements can be achieved. The drive mechanism may for example be a drive motor with which the walking wheels in the walking mechanism are driven to move. In a specific implementation, the drive motor may for example be a reversible drive motor, and a speed-change mechanism may also be arranged between the drive motor and the axle of the walking wheels.
As mentioned above, when the processing device 15 recognizes a flexible wrap in at least one image captured by the imaging device 13, it invokes the simultaneous localization and mapping application and the behavior control application from the storage device 11 to control the behavior of the mobile robot. Depending on information such as the category, size, and position of the flexible wrap, the processing device 15 can adopt different behavior-control strategies for the mobile robot, where the behavior of the mobile robot includes at least, but is not limited to: the movement of the mobile robot and the posture of the mobile robot.
In some embodiments, the way the processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot may include: based on the information of the flexible wrap, issuing a control instruction to the movement system 17 to control the mobile robot to move along the original navigation route and over the flexible wrap. Specifically, if the processing device 15 recognizes a flexible wrap and, combining information such as its category, size, and/or position, judges that the wrap will not interfere with the robot's normal operation, the processing device 15 issues a control instruction to the movement system 17 to control the robot to move along the original navigation route and over the wrap. Taking a cleaning robot as an example: if the flexible wrap in the image is judged to be a cable or rope of small diameter placed regularly, or a larger piece of cloth lying flat on the floor, the processing device 15 can control the cleaning robot to move along the original navigation route and over the wrap. In controlling the cleaning robot to move along the original navigation route, various implementations can be adopted: in one, the cleaning robot moves along the original route at its original speed and in its original posture; in another, it changes its original moving speed (which may mean increasing or decreasing it) and moves along the original route in its original posture; in yet another, it changes both its moving speed and its posture while following the original route, where changing the original moving speed may again include increasing or decreasing it.
In some embodiments, the way the processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot may include: based on the information of the flexible wrap, issuing a control instruction to the movement system 17 to control the mobile robot to modify the original navigation route and move over the flexible wrap. Specifically, if the processing device 15 recognizes a flexible wrap and, combining information such as its category, size, and/or position, judges that under the original navigation route the wrap is likely to interfere with the robot's normal operation but that this can be avoided by changing the route, the processing device 15 issues a control instruction to the movement system 17 to control the robot to modify the original route and move over the wrap. Taking a cleaning robot as an example: if the placement of the flexible wrap in the image may interfere with the robot's normal operation (e.g., a cable, rope, or thread laid lengthwise in substantially the same direction as the original route, or a cable, rope, thread, ribbon, etc. lying exactly beneath the walking wheels or vacuum air inlet of the cleaning robot on the original route), the processing device 15 can control the cleaning robot to modify the original route and move over the wrap, for example modifying the route so that the new route is perpendicular to the direction in which the wrap is laid, allowing the robot to cross it, or so that, when the robot crosses the wrap, the wrap does not lie beneath the robot's walking wheels or vacuum air inlet on the new route. While the cleaning robot follows the modified route, its moving speed admits different implementations: it may remain unchanged, or be increased or decreased.
In some embodiments, the way the processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot may include: based on the information of the flexible wrap, issuing a control instruction to the movement system 17 to control the mobile robot to modify the original navigation route so as to avoid the flexible wrap. Specifically, if the processing device 15 recognizes a flexible wrap and, combining information such as its category, size, and/or position, judges that the wrap is likely to interfere with the robot's normal operation, it issues a control instruction to the movement system 17 to control the robot to modify the original route and avoid the wrap. Taking a cleaning robot as an example: if the flexible wrap in the image is judged to be a cable, rope, or cloth scrap placed irregularly, or a cable or rope of larger size, or a thread or ribbon, the processing device 15 can control the cleaning robot to modify the original route and avoid the wrap.
In some embodiments, the way the processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot may include: based on the information of the flexible wrap, issuing a control instruction to the movement system 17 to control the mobile robot to stop moving. Specifically, if the processing device 15 recognizes a flexible wrap and, combining information such as its category, size, and/or position, judges that the wrap is likely to interfere with the robot's normal operation, or cannot reliably judge the degree of interference with the robot's normal operation, it issues a control instruction to the movement system 17 to stop the robot. In practical applications it can also be arranged that, once a flexible wrap is recognized in the image, the calculations and judgments on its size, placement, and other information are skipped and a control instruction is issued directly to the movement system 17 to stop the mobile robot.
In addition, in some embodiments, the way the processing device 15 invokes the simultaneous localization and mapping application and the behavior control application to control the behavior of the mobile robot may include: based on the information of the flexible wrap, ignoring the wrap and issuing a control instruction to the movement system 17 to control the mobile robot to move along the original navigation route. Specifically, if the processing device 15 recognizes a flexible wrap and, combining information such as its category, size, and/or position, judges that the wrap is not on the original navigation route, it issues a control instruction to the movement system 17 to control the robot to move along the original route. Taking a cleaning robot as an example: if the flexible wrap in the image is judged to be a cable or rope running along the base of a wall, or a thread, scarf, or cloth scrap lying under a tea table or sofa while the cleaning robot's navigation route does not include cleaning under the tea table or sofa, the processing device 15 can control the cleaning robot to move along the original route, ignoring the wrap.
本申请移动机器人还可包括报警装置(未予图示),所述报警装置与处理装置15相连,用于在处理装置15识别出所述图像中存在柔性缠绕物时发出报警信息。具体地,若处理装置15识别出所述图像中存在柔性缠绕物,处理装置15即向所述报警装置发出控制指令以控制所述报警装置发出报警信息。所述报警装置及其所发出的报警信息可采用多种实施方式或其结合。在一实施方式中,所述报警装置可例如为蜂鸣器,所述蜂鸣器在处理装置15识别出所述图像中存在柔性缠绕物时发出报警声。在另一实施方式中,所述报警装置可例如为报警灯,所述报警灯在处理装置15识别出所述图像中存在柔性缠绕物时发出报警灯光,所述报警灯光可为常亮光或闪烁光。在又一实施方式中,所述报警装置可例如为信息发送装置,所述信息发送装置在处理装置15识别出所述图像中存在柔性缠绕物时向网络连接的用户终端(例如智能手机)或室内智能终端(例如智能音箱、智能灯泡、智能显示屏等)发送报警信息。利用所述报警装置,可即时发出发现柔性缠绕物的信息,以供后续由操作人员移除柔性缠绕物以排除障碍。
本申请移动机器人,在移动机器人的工作模式下,通过拍摄以获取包含地面的图像,对所述图像进行识别,在识别出所述图像中存在柔性缠绕物时调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制。利用本申请移动机器人,可针对柔性缠绕物进行有效的检测,并可根据检测结果对移动机器人的行为进行相应的控制。
请参见图3,显示为本申请清洁机器人在一实施例中的结构示意图。如图3所示,所述清洁机器人包括:摄像装置21、控制系统23、移动系统25、以及清洁系统27,其中,控制系统23更包括存储装置230和处理装置232。
摄像装置21用于在清洁机器人的工作模式下获取操作环境的图像。摄像装置21包括但不限于:照相机、视频摄像机、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。摄像装置21的供电系统可受清洁机器人的供电系统控制,当清洁机器人上电移动期间,摄像装置21即开始拍摄图像,并提供给处理装置232。例如,清洁机器人中的摄像装置将所拍摄的室内图像以预设视频格式缓存在存储装置230中,并由处理装置232获取。摄像装置21用于在清洁机器人移动期间拍摄图像。在此,在某些实施方式中,摄像装置21可设置于清洁机器人的顶面。例如,清洁机器人中的摄像装置设置于其壳体的顶面的中部、或边缘上。摄像装置的视野光学轴相对于垂线为±30°。例如,清洁机器人的摄像装置的光学轴相对于垂线的夹角为-30°、-29°、-28°、 -27°、……、-1°、0°、1°、2°、……、29°、或30°。在某些实施方式中,摄像装置21可设置于清洁机器人的顶面与侧面的交接处。例如,在清洁机器人壳体的顶面与侧面的交接处设置至少一个凹陷结构(所述凹陷结构可设于壳体的前端、后端或者侧端),将摄像装置设置于所述凹陷结构内。摄像装置中的镜头光学轴与所述壳体的顶部表面所定义的平面(所述壳体的顶部表面所定义的平面可与水平面相一致,即当将所述清洁机器人平稳放置于一水平面时,所述壳体的顶部表面所定义的平面与所述水平面相平行)的夹角α为61°至85°,即摄像装置中的镜头光学轴与所述壳体的顶部表面所定义的平面的夹角α为61°、62°、63°、64°、65°、66°、67°、68°、69°、70°、71°、72°、73°、74°、75°、76°、77°、78°、79°、80°、81°、82°、83°、84°、85°。摄像装置中的镜头为前倾设计,可捕捉到更多的环境信息。例如,前倾设计的摄像装置相比于镜头竖直朝上的摄像装置能更多地捕捉到清洁机器人前方的环境图像,比如,清洁机器人前方的部分地面区域。需要说明的是,本领域技术人员应该理解,上述光学轴与垂线或壳体的顶部表面的夹角的取值为整数但并非限制其夹角精度为1°的范围内,根据实际清洁机器人的设计需求,所述夹角的精度可更高,如达到0.1°、0.01°以上等,在此不做无穷尽的举例。
存储装置230存储有同时定位与地图构建应用及行为控制应用。
所述同时定位与地图构建应用即SLAM(Simultaneous Localization And Mapping)应用,SLAM应用是智能机器人领域中的基础应用。清洁机器人的定位技术可以包括允许清洁机器人相对于其周边确定其位置和导向(或“姿态”)的过程,能够构建其周边地图的清洁机器人可以在地图中定位其自身,从而体现其自主程度。该问题可以描述为:当清洁机器人处于未知环境中时,如何让清洁机器人一边逐步描绘出此环境的完整地图,一边决定应该往哪个方向行进。也就是说,要实现智能化需要完成三个任务,第一个是定位(Localization),第二个是建图(Mapping),第三个则是随后的路径规划(Navigation)。本申请中的行为控制应用是指根据所设定的信息或指令控制清洁机器人导航、进行导向(或“姿态”)调整等。“姿态”在本文中包括清洁机器人在移动空间内的位置(例如,x、y坐标地点),以及相对于例如在移动空间内的基本物(例如墙壁)或基本方向的所述清洁机器人的角度导向。需说明的是,为了补偿基于SLAM技术所构建的地图的误差,一种基于视觉的同时定位与地图构建技术(Visual Simultaneous Localization and Mapping,简称VSLAM)可利用图像传感器中的图像数据对移动传感器所提供的移动信息的误差进行补偿,从而为清洁机器人提供更精准的导航能力。
所述行为控制应用是智能机器人领域中的基础应用,其与处理装置232和移动系统25相关联。利用所述行为控制应用,可使得处理装置232能对移动系统25进行控制。在实际应用中,所述行为控制应用可与前述的SLAM应用相结合,那么,处理装置232可根据SLAM应用所获得定位信息和地图信息来向移动系统25发出控制指令以令移动系统25执行相应的行为。“行为”在本文中包括清 洁机器人的移动和姿态。
此外,存储装置230还预存有至少一个标准件的标准物理特征。其中,所述标准件可包括基于行业标准、国家标准、国际标准、和自定义标准中的至少一种标准而设计的标准件。例如,行业标准如机械行业标准JB、建筑材料行业标准JC等;国家标准如中国GB标准、德国DIN标准、英国BS标准等;国际标准如国际ISO标准;自定义标准稍后详述。所述标准物理特征可包括轮廓尺寸、标准结构关系等,例如,标准件的标准物理特征包括标准件实际物理上的长、宽、高,标准件中对应标准的实际物理上的其他尺寸数据等。例如,电源插座上两孔间距。又如电源插座的长宽值。再如底板的长宽值或地砖的长宽值。还比如地毯的长宽值及厚度。
在此,存储装置230包括但不限于:只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、非易失性存储器(Nonvolatile RAM,简称NVRAM),例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。在某些实施例中,存储装置230还可以包括远离一个或多个处理器的存储器,例如,经由RF电路或外部端口以及通信网络(未示出)访问的网络附加存储器,其中所述通信网络可以是因特网、一个或多个内部网、局域网(LAN)、广域网(WAN)、存储局域网(SAN)等,或其适当组合。存储器控制器可控制清洁机器人的诸如中央处理器(CPU)和外设接口之类的其他组件对存储装置的访问。
处理装置232包括一个或多个处理器。处理装置232可操作地与存储装置230中的只读存储器、随机存取存储器和/或非易失性存储器耦接。处理装置232可执行在只读存储器、随机存取存储器和/或非易失性存储器中存储的指令以在机器人中执行操作,诸如提取图像中的特征并基于特征在地图中进行定位、或者获取图像并对图像进行识别等。如此,处理器可包括一个或多个通用微处理器、一个或多个专用处理器(ASIC)、一个或多个数字信号处理器(Digital Signal Processor,简称DSP)、一个或多个现场可编程逻辑阵列(Field Programmable Gate Array,简称FPGA)、或它们的任何组合。处理装置232还与I/O端口和输入结构可操作地耦接,该I/O端口可使得清洁机器人能够与各种其他电子设备进行交互,该输入结构可使得用户能够与计算设备进行交互。因此,输入结构可包括按钮、键盘、鼠标、触控板等。所述其他电子设备可以是所述清洁机器人中移动装置中的移动电机,或清洁机器人中专用于控制移动装置和清扫装置的从处理器,如微控制单元(Microcontroller Unit,简称MCU)。
在一种示例中,处理装置232通过数据线分别连接存储装置230和摄像装置21。处理装置232通过数据读写技术与存储装置230进行交互,处理装置232通过接口协议与摄像装置21进行交互。其中,所述数据读写技术包括但不限于:高速/低速数据接口协议、数据库读写操作等。所述接口协议包括但不限于:HDMI接口协议、串行接口协议等。
处理装置232用于在清洁机器人的工作模式下,控制所述摄像装置进行拍摄以获取包含地面的图像,并在识别出所述图像中存在柔性缠绕物时自所述存储装置中调用同时定位与地图构建应用及行为控制应用以对所述清洁机器人的行为进行控制。
处理装置232用以从摄像装置21所拍摄的图像中获取至少一幅图像,并对所述至少一幅图像进行识别以检测所述至少一幅图像中是否存在柔性缠绕物。
如前所述,一般地,清洁机器人在工作模式下在工作面(例如地面)上移动时,对地面上的柔性缠绕物的检测存在缺失。因此,在本申请中,在清洁机器人的工作模式下,可控制摄像装置21进行拍摄以获取包含地面的图像,这里的“地面”更可具体为清洁机器人对照行走路径而在后续会移动到的地面。
对所述至少一幅图像进行柔性缠绕物的识别是利用一柔性缠绕物图像分类器实现的,即,在识别时,将待识别的图像作为一输入输入到柔性缠绕物图像分类器中即可输出识别结果。在本实施例中,所述柔性缠绕物图像分类器是经卷积神经网络训练得到的。
在某些实施例中,所述训练可包括:首先,制作训练样本集,采集含有符合预设规则的柔性缠绕物的图像作为训练样本。之后,根据制作的所述训练样本集进行训练,得到柔性缠绕物图像分类器。其中,在制作训练样本集时,在一种实施方式中,可自行采集符合预设规则的柔性缠绕物的图像,例如,从网络中搜索相关的柔性缠绕物的图像或自行拍摄相关的柔性缠绕物的图像,并从中选取符合预设规则的典型的柔性缠绕物的图像并将其作为训练样本。而在其他实施方式中,也可从已有的各类柔性缠绕物的标准库中选取部分或全部的柔性缠绕物的图像作为训练样本,例如,分别从不同的柔性缠绕物的标准库中选取部分或全部的柔性缠绕物的图像,将它们组合后形成训练样本集,或者,从不同的柔性缠绕物的标准库选择至少一个标准库,将选中的至少一个标准库中的部分或全部的图像确定为训练样本集。在这里,作为训练样本的包含柔性缠绕物的图像可以是背景单一(例如,背景为单一纯色)的简单图像也可以是在实物背景下的图像。由于在本申请中,清洁机器人控制摄像装置21进行拍摄所获取的是包含地面的图像,因此,作为训练样本的图像可以是包含有柔性缠绕物的地面图像。落实到具体的柔性缠绕物,在本申请中,所述柔性缠绕物包括但不限于以下几类:线缆、绳索、丝带、鞋带、毛巾、布头、棉絮、植物藤蔓等。至于地面,根据实际应用环境,所述地面包括但不限于以下几类:水泥地面、涂漆的地面、铺设复合地板的地面、铺设实木地板的地面、铺设地毯的地面等。因此,针对特定的柔性缠绕物,可制作对应的训练样本集,即,可制作对应线缆的线缆训练样本集(例如,各类线缆在不同地面上以不同型态呈现的图像)、对应绳索的绳索训练样本集(例如,各类绳索在不同地面上以不同型态呈现的图像)、对应丝带的丝带训练样本集(例如,各类丝带在不同地面上以不同型态呈现的图像)、对应布头的布头训练样本集(例如,各类布头在不同地面上以不同型态呈现的图像)、对应棉絮的棉絮训练样本集(例如,各类棉絮在不同地面上以不同型态呈现的图像)、对应植物藤蔓的植物藤蔓训练样本集(例如,各类植物藤蔓在不同地 面上以不同型态呈现的图像)等。另外,可补充说明的是,在将制作的训练样本集进行训练之前,还可对训练样本集中的图像进行相应的图像预处理。在某些实施例中,所述图像预处理包括但不限于:将训练样本集中的图像进行裁剪、压缩、灰度处理、图像滤波和/或噪声过滤处理等。
在某些实施例中,所述训练可包括:首先,制作训练样本集,采集含有符合预设规则的柔性缠绕物的图像作为正样本以及采集不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像作为负样本。之后,根据制作的训练样本集进行训练,得到柔性缠绕物图像分类器。其中,针对采集含有符合预设规则的柔性缠绕物的图像作为正样本,在一种实施方式中,可自行采集符合预设规则的柔性缠绕物的图像,例如,从网络中搜索相关的柔性缠绕物的图像或自行拍摄相关的柔性缠绕物的图像,并从中选取符合预设规则的典型的柔性缠绕物的图像并将其作为正样本。而在其他实施方式中,也可从已有的各类柔性缠绕物的标准库中选取部分或全部的柔性缠绕物的图像作为正样本,例如,分别从不同的柔性缠绕物的标准库中选取部分或全部的柔性缠绕物的图像,将它们组合后形成正样本集,或者,从不同的柔性缠绕物的标准库选择至少一个标准库,将选中的至少一个标准库中的部分或全部的图像确定为正样本。在这里,作为正样本的包含柔性缠绕物的图像可以是背景单一(例如,背景为单一纯色)的简单图像也可以是在实物背景下的图像。由于在本申请中,清洁机器人控制摄像装置21进行拍摄所获取的是包含地面的图像,因此,作为正样本的图像可以是包含有柔性缠绕物的地面图像。落实到具体的柔性缠绕物,在本申请中,所述柔性缠绕物包括但不限于以下几类:线缆、绳索、丝带、鞋带、毛巾、布头、棉絮、植物藤蔓等。至于地面,根据实际应用环境,所述地面包括但不限于以下几类:水泥地面、涂漆的地面、铺设复合地板的地面、铺设实木地板的地面、铺设地毯的地面等。因此,针对特定的柔性缠绕物,可制作对应的正样本,即,可制作对应线缆的线缆正样本集(例如,各类线缆在不同地面上以不同型态呈现的图像)、对应绳索的绳索正样本集(例如,各类绳索在不同地面上以不同型态呈现的图像)、对应丝带的丝带正样本集(例如,各类丝带在不同地面上以不同型态呈现的图像)、对应布头的布头正样本集(例如,各类布头在不同地面上以不同型态呈现的图像)、对应棉絮的棉絮正样本集(例如,各类棉絮在不同地面上以不同型态呈现的图像)、对应植物藤蔓的植物藤蔓正样本集(例如,各类植物藤蔓在不同地面上以不同型态呈现的图像)等。针对采集不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像作为负样本,在一种实施方式中,可自行采集不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像,例如,从网络中搜索相关的不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像或自行拍摄不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像,并从中选取不含有柔性缠绕物或含有不符合预设规则的柔性缠绕物的图像并将其作为负样本。而在其他实施方式中,也可从已有的各类不含有柔性缠绕物的标准库中选取部分或全部的图像作为负样本,例如,分别从不同的不含有柔性缠绕物的标准库中选取部分或全部的图像,将它们组合后形成负样本集,或者,从不同的不含有 柔性缠绕物的标准库选择至少一个标准库,将选中的至少一个标准库中的部分或全部的图像确定为负样本。落实到具体的柔性缠绕物,在本申请中,所述柔性缠绕物包括但不限于以下几类:线缆、绳索、丝带、鞋带、毛巾、布头、棉絮、植物藤蔓等。至于地面,根据实际应用环境,所述地面包括但不限于以下几类:水泥地面、涂漆的地面、铺设复合地板的地面、铺设实木地板的地面、铺设地毯的地面等。因此,针对特定的柔性缠绕物,可制作对应的负样本集,即,可制作对应线缆的线缆负样本集(例如,不含有线缆或含有不符合预设规则的线缆在不同地面上的图像)、对应绳索的绳索负样本集(例如,不含有绳索或含有不符合预设规则的绳索在不同地面上的图像)、对应丝带的丝带负样本集(例如,不含有丝带或含有不符合预设规则的丝带在不同地面上的图像)、对应布头的布头负样本集(例如,不含有布头或含有不符合预设规则的布头在不同地面上的图像)、对应棉絮的棉絮负样本集(例如,不含有棉絮或含有不符合预设规则的棉絮在不同地面上的图像)、对应植物藤蔓的植物藤蔓负样本集(例如,不含有植物藤蔓或含有不符合预设规则的植物藤蔓在不同地面上的图像)等。另外,可补充说明的是,在将制作的训练样本集进行训练之前,还可对训练样本集中的图像进行相应的图像预处理。在某些实施例中,所述图像预处理包括但不限于:将训练样本集中的图像进行截取、压缩、灰度处理、图像滤波和/或噪声过滤处理等。
后续,即可利用训练得到的柔性缠绕物图像分类器对图像进行识别。在本申请中,在进行图像识别时,将待识别的图像作为一输入输入到柔性缠绕物图像分类器中,即可由柔性缠绕物图像分类器输出相应的识别结果。在某些实施例中,利用柔性缠绕物图像分类器对图像进行识别可至少包括如下步骤:对待识别的图像进行图像预处理;对图像预处理后的所述图像进行特征提取;将待识别的图像的特征输入到柔性缠绕物图像分类器,得到识别结果。
其中,对待识别的图像进行图像预处理包括但不限于:对待识别的图像进行裁剪、压缩、灰度处理、阈值化处理等,当然,所述预处理还可包括图像滤波、噪声滤波处理等。以灰度处理和阈值化处理为例,对待识别的图像进行灰度处理以得到灰度图像,对灰度处理后的灰度图像进行阈值化处理(例如,所述灰度图像经二值化处理后可变成能反映图像整体和局部特征的二值化图像,即黑白图像)。对图像预处理后的所述图像进行特征提取包括但不限于:提取待识别的图像的轮廓特征、纹理特征等。
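上述“灰度处理—阈值化—特征提取—分类”的识别步骤可用如下纯Python草图示意。其中固定阈值128、以前景像素占比充当“特征”、以及末尾的桩(stub)分类器均为示意性假设;实际系统中提取的是轮廓、纹理等特征,分类器则为经卷积神经网络训练得到的柔性缠绕物图像分类器。

```python
# 示意性草图:图像预处理与识别流程,仅说明步骤顺序,非本申请限定实现。

def to_gray(img_rgb):
    # img_rgb: 二维列表,每个元素为 (R, G, B);按常用加权公式灰度化
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img_rgb]

def binarize(gray, threshold=128):
    # 阈值化:得到仅含 0/1 的二值化图像(即黑白图像)
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def extract_features(binary):
    # 极简"特征":前景像素占比;真实系统会提取轮廓特征、纹理特征等
    total = sum(len(row) for row in binary)
    fg = sum(sum(row) for row in binary)
    return {"fg_ratio": fg / total}

def classify(features, classifier):
    # 将待识别图像的特征输入分类器,得到识别结果
    return classifier(features)

# 桩分类器:仅作占位,代替实际经卷积神经网络训练得到的分类器
stub_classifier = lambda f: "winding" if f["fg_ratio"] > 0.2 else "none"
```

例如,对一幅预处理后的二值图调用 `classify(extract_features(binary), stub_classifier)` 即可得到识别结果。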
值得注意的是:在某些实施例中,前述用于进行柔性缠绕物识别的柔性缠绕物图像分类器可预存于存储装置230中。在一种实现方式中,在清洁机器人出售给终端用户之前(例如,在清洁机器人被制造出厂之前,或,清洁机器人被下发到各个销售点之前,或清洁机器人在销售点被贩售给终端用户之前),柔性缠绕物图像分类器被写入到存储装置230中,一般地,可对柔性缠绕物图像分类器设置权限,禁止终端用户对其进行改动。当然,并不以此为限,例如,所述柔性缠绕物图像分类器也可开放部分权限或全部权限,允许终端用户对其进行改动(例如,修改或增减操作等)。或者,所述柔性缠绕物图像分类器也可在清洁机器人联网并与相应的厂商服务器或应用服务商服务器建立通信连接后进行更新操作。在其他实施方式中,所述柔性缠绕物图像分类器可存储于与清洁机器人远程通信的云端系统中,如此,在进行图像识别时,处理装置232可从摄像装置21所拍摄的图像中获取至少一幅图像并将所述至少一幅图像发送至与清洁机器人远程通信的云端系统中,由云端系统中的柔性缠绕物图像分类器对所述至少一幅图像进行识别并将识别结果再远程发送给清洁机器人。
因此,利用处理装置232可从摄像装置21所拍摄的图像中获取至少一幅图像并利用柔性缠绕物图像分类器对所述至少一幅图像进行识别,由此可检测出所述至少一幅图像中是否存在柔性缠绕物且获得所存在的柔性缠绕物的具体类别。
处理装置232还可用于在识别出图像中存在柔性缠绕物时自存储装置230中调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制。
处理装置232用于调用所述定位与地图构建应用以执行:获取前、后时刻的至少两幅图像中相匹配特征的位置,并依据图像坐标系与物理空间坐标系的对应关系和所述相匹配特征的位置确定清洁机器人的位置及姿态。
在本申请中,存储装置230还存储有图像坐标系与物理空间坐标系的对应关系。其中,所述图像坐标系是基于图像像素点而构建的图像坐标系,摄像装置21所拍摄的图像中各个图像像素点的二维坐标参数可由所述图像坐标系描述。所述图像坐标系可为直角坐标系或极坐标系等。所述物理空间坐标系是基于实际二维或三维物理空间中各位置而构建的坐标系,其物理空间位置可依据预设的图像像素单位与单位长度(或单位角度)的对应关系而被描述在所述物理空间坐标系中。所述物理空间坐标系可为二维直角坐标系、极坐标系、球坐标系、三维直角坐标系等。
对于所使用场景的地面复杂度不高的清洁机器人来说,该对应关系可在出厂前预存在所述存储装置中。然而,对于使用场景的地面复杂度较高的清洁机器人来说,可利用在所使用的场地进行现场测试的方式得到所述对应关系并保存在存储装置230中。在某些实施方式中,所述清洁机器人还包括移动传感装置(未予图示),用于获取机器人的移动信息。其中,所述移动传感装置包括但不限于:位移传感器、陀螺仪、速度传感器、测距传感器、悬崖传感器等。在机器人移动期间,移动传感装置不断侦测移动信息并提供给处理装置。所述位移传感器、陀螺仪、速度传感器等可被集成在一个或多个芯片中。所述测距传感器和悬崖传感器可设置在机器人的体侧。例如,清洁机器人中的测距传感器被设置在壳体的边缘;清洁机器人中的悬崖传感器被设置在机器人底部。根据机器人所布置的传感器的类型和数量,处理装置所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。
为了构建所述对应关系,在某些实施方式中,所述清洁机器人还包括初始化装置(未予图示),所述初始化装置可基于前后时刻的至少两幅图像中相匹配特征的位置和自所述前一时刻至所述当前 时刻所获取的移动信息,构建所述对应关系。在此,所述初始化装置可以是一种程序模块,其程序部分存储在存储装置中,并经由处理装置的调用而被执行。当所述存储装置中未存储所述对应关系时,所述处理装置调用初始化装置以构建所述对应关系。
在此,初始化装置在机器人移动期间获取所述移动传感装置所提供的移动信息以及获取摄像装置21所拍摄的各个图像。为了减少移动传感装置的累积误差,所述初始化装置可在机器人移动的一小段时间内获取所述移动信息和至少两幅图像。例如,所述初始化装置在监测到机器人处于直线移动时,获取所述移动信息和至少两幅图像。又如,所述初始化装置在监测到机器人处于转弯移动时,获取所述移动信息和至少两幅图像。其中,在转弯移动时获取至少两幅图像的间隔时间可比在直线移动时获取至少两幅图像的间隔时间要短。
接着,所述初始化装置对各个图像中的特征进行识别和匹配并得到相匹配特征在各个图像中的图像位置。其中特征包括但不限于角点特征、边缘特征、直线特征、曲线特征等。例如,所述初始化装置可依据一跟踪装置(未予图示)来获取相匹配特征的图像位置。所述跟踪装置用于跟踪前后时刻至少两幅图像中包含相同特征的位置。
所述初始化装置再根据所述图像位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,所述初始化装置可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,所述初始化装置可依据所拍摄前一时刻图像所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
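初始化装置构建对应关系的上述过程,可用如下草图示意:用同一特征在前后两幅图像中的像素位移,与同期移动传感装置给出的物理位移,估计“单位像素对应的物理长度”,并以前一时刻图像所在位置为物理空间坐标系原点进行映射。以下为直线移动情形下的简化假设,函数与变量命名均为示意,非本申请限定的具体实现。

```python
# 示意性草图:基于相匹配特征的像素位移与里程计位移构建像素-物理对应关系
import math

def build_scale(feat_t1, feat_t2, odom_distance_m):
    """feat_t1 / feat_t2: 同一特征在前后两幅图像中的像素坐标 (u, v);
    odom_distance_m: 两时刻间移动传感装置给出的物理位移(米)。
    返回每像素对应的物理长度(米/像素)。"""
    du = feat_t2[0] - feat_t1[0]
    dv = feat_t2[1] - feat_t1[1]
    pixel_shift = math.hypot(du, dv)         # 特征的像素位移
    return odom_distance_m / pixel_shift

def pixel_to_physical(p, origin, scale):
    # 以前一时刻图像所在位置为物理空间坐标系的坐标原点,映射像素坐标
    return ((p[0] - origin[0]) * scale, (p[1] - origin[1]) * scale)
```

例如,特征像素位移50像素、里程计位移0.5米,则比例为0.01米/像素。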
需要说明的是,所述初始化装置的工作过程可以基于用户的指令来执行,或对用户透明。例如,所述初始化装置的执行过程是基于存储装置230中未存储所述对应关系、或所述对应关系需要被更新时而启动的。在此不做限制。
所述对应关系可由对应算法的程序、数据库等方式保存在所述存储装置中。为此,存储在存储器中的软件组件包括操作系统、通信模块(或指令集)、接触/运动模块(或指令集)、图形模块(或指令集)、以及应用(或指令集)。此外,存储装置还保存有包含摄像装置所拍摄的图像、处理装置在进行定位运算时所得到的位置及姿态在内的临时数据或持久数据。
在构建了所述对应关系后,所述处理装置获取当前时刻图像和前一时刻图像中相匹配的特征,并依据所述对应关系和所述特征确定机器人的位置及姿态。
在此,处理装置232可按照预设的时间间隔或图像数量间隔获取前一时刻t1和当前时刻t2的两幅图像,识别并匹配两幅图像中的特征。其中,根据所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像中包含预设灰度范围的区域尺寸等。
为了能够准确定位,所匹配特征的数量通常为多个,例如在10个以上。为此,处理装置232根据所识别的特征在各自图像中的位置,从所识别出的特征中寻找能够匹配的特征。例如,请参阅图2,显示为在t1时刻和t2时刻所获取的两幅图像中相匹配特征的位置变化关系示意图。处理装置232在识别出各个图像中的特征后,确定图像P1中包含特征a1和a2,图像P2中包含特征b1、b2和b3,且特征a1与b1和b2均属于同一类特征,特征a2与b3属于同一类特征,处理装置232可先确定在图像P1中的特征a1位于特征a2的左侧且间距为d1像素点;同时还确定在图像P2中的特征b1位于特征b3的左侧且间距为d1’像素点,以及特征b2位于特征b3右侧且间距为d2’像素点。处理装置232根据特征b1与b3的位置关系、特征b2与b3的位置关系分别与特征a1与a2的位置关系,以及特征b1与b3的像素间距、特征b2与b3的像素间距分别与特征a1与a2的像素间距进行匹配,从而得到图像P1中特征a1与图像P2中特征b1相匹配,特征a2与特征b3相匹配。以此类推,处理装置232将所匹配的各特征进行对应,以便于依据各所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中,所述机器人的位置可依据在二维平面内的位移变化而得到,所述姿态可依据在二维平面内的角度变化而得到。
在此,处理装置232可以根据所述对应关系,确定两幅图像中多个特征的图像位置偏移信息、或确定多个特征在物理空间中的物理位置偏移信息,并综合所得到的任一种位置偏移信息来计算机器人自t1时刻至t2时刻的相对位置及姿态。例如,通过坐标变换,处理装置232得到机器人从拍摄图像P1时刻t1至拍摄图像P2时刻t2的位置和姿态为:在地面上移动了m长度以及向左旋转了n度角。
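由相匹配特征的位置偏移计算相对位移与相对转角的上述过程,可用如下草图示意。此处将对应关系简化为“每像素scale米”的单一比例,且仅作二维平面内的简化估计;实际计算需经完整的坐标变换,以下仅为示意性假设。

```python
# 示意性草图:由两幅图像中相匹配特征的像素偏移估计机器人的相对位姿
import math

def relative_pose(matches, scale):
    """matches: [((u, v)_t1, (u, v)_t2), ...] 相匹配特征的像素位置对;
    scale: 对应关系给出的每像素物理长度(米/像素)。
    返回 (位移长度 m(米), 相对转角 n(度))。"""
    n = len(matches)
    # 位移:各匹配特征像素偏移的平均值,乘以比例得到物理位移长度
    du = sum(p2[0] - p1[0] for p1, p2 in matches) / n
    dv = sum(p2[1] - p1[1] for p1, p2 in matches) / n
    distance_m = math.hypot(du, dv) * scale
    # 转角:取前两个特征连线方向在前后两幅图像中的夹角变化
    (f0_t1, f0_t2), (f1_t1, f1_t2) = matches[0], matches[1]
    ang_t1 = math.atan2(f1_t1[1] - f0_t1[1], f1_t1[0] - f0_t1[0])
    ang_t2 = math.atan2(f1_t2[1] - f0_t2[1], f1_t2[0] - f0_t2[0])
    rotation_deg = math.degrees(ang_t2 - ang_t1)
    return distance_m, rotation_deg
```

例如,若所有匹配特征整体平移10像素且连线方向不变,则估计结果为纯平移、零转角。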
如此,当清洁机器人已建立地图时,依据处理装置232得到的位置及姿态可帮助清洁机器人确定是否在导航的路线上。当清洁机器人未建立地图时,依据处理装置232得到的位置及姿态可帮助清洁机器人确定相对位移和相对转角,并借此数据进行地图绘制。
处理装置232还用于调用所述定位与地图构建应用以执行:获取至少一幅图像,依据所述至少一幅图像中特征的位置确定所述至少一幅图像中柔性缠绕物的位置以及依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息。
处理装置232可采用基于卷积神经网络的图像识别方法、基于小波矩的图像识别方法等图像识别方法对摄像装置21所拍摄的图像进行处理、分析和理解,以识别各种不同模式的目标和对象。此外,所述处理装置还可通过对图像内容、特征、结构、关系、纹理及灰度等的对应关系,相似性和一致性的分析来寻求相似图像目标。
在一实施例中,由于清洁机器人通常进行室内清洁工作,因而通过摄像装置所拍摄的图像中的 实物一般会包括例如墙、桌、沙发、衣柜、电视机、电源插座、网线插座等。在本示例中,首先,摄像装置21在清洁机器人的导航操作环境下拍摄图像之后将所述图像提供给处理装置232,处理装置232通过图像识别来识别所拍摄的图像中实物的图形。其中,所述实物的图形可以由实物的灰度、实物的轮廓等特征表征。同时,所述实物的图形并不限于实物的外部几何图形,还可包括实物上呈现出的其他图形,例如电源插座上的二孔插口、五孔插口,网线插座上的方形插口等。鉴于此,例如,对于外部几何图形相近的电源插座和网线插座,则可利用电源插座的五孔插口与网线插座的方形插口来辨别。此外,在清洁机器人的摄像装置在室内可拍摄的图像中的实物包括电源插座、网线插座的情况下,由于电源插座、网线插座是根据GB标准设计的,因而不会因其所处的环境不同而有所变化,因此,可以作为标准件。标准件的标准物理特征可包括电源插座的长、宽、高,电源插座上五孔插口的结构关系等。在某些实施方式中,标准件的图形和标准件的标准物理特征可以是预设的,并且利用机器人的存储装置预先存储。因此,获取标准件的标准物理特征的方式包括自机器人的存储装置中读取预设的标准物理特征。其中,所述标准件可包括基于行业标准、国家标准、国际标准、和自定义标准中的至少一种标准而设计的标准件。例如,行业标准如机械行业标准JB、建筑材料行业标准JC等;国家标准如中国GB标准、德国DIN标准、英国BS标准等;国际标准如国际ISO标准;自定义标准稍后详述。所述标准物理特征可包括轮廓尺寸、标准结构关系等,例如,标准件的标准物理特征包括标准件实际物理上的长、宽、高,标准件中对应标准的实际物理上的其他尺寸数据等。例如,电源插座上两孔间距。又如电源插座的长宽值。再如底板的长宽值或地砖的长宽值。还比如地毯的长宽值及厚度。
此外,针对所识别的图像中实物的图形和所存储的标准件的图形,处理装置232通过对图像内容、特征、结构、关系、纹理及灰度等的对应关系、相似性和一致性的分析来确定所识别的至少一个图形与所存储的标准件的图形是否对应,当所识别的至少一个图形与所存储的标准件的图形对应时,获取标准件的标准物理特征。其中,与所存储的标准件的图形对应的所述至少一个图形被称为标准图形。以电源插座为例,存储装置230存储有标准电源插座图形,处理装置232通过对图像内容、特征、结构、关系、纹理及灰度等的对应关系、相似性和一致性的分析来确定所识别的至少一个图形与所存储的电源插座的图形是否对应,当所识别的至少一个图形与所存储的电源插座的图形对应时,获取电源插座的标准物理特征。
因此,基于预设的单位像素间隔与实际物理空间中单位长度的对应关系以及所识别的标准图形的尺寸和所对应的标准物理特征中的实物尺寸,处理装置232即可计算出图像中存在的柔性缠绕物在当前物理空间内的位置及其尺寸信息。以设置在墙上的插座为例,当处理装置识别出插座以及墙与地面的交界线、或识别出插座并默认插座被安装在墙上时,按照上述对应关系,处理装置232不仅可得到柔性缠绕物在当前物理空间内的位置(例如,所述柔性缠绕物与插座的距离和偏角,所述清洁机器人相距插座的距离和偏角,所述清洁机器人相距柔性缠绕物的距离和偏角等),处理装置232还可以利用插座的标准量度(例如插座的边框的长宽值或插座中插孔的间距等)的空间位置关系得到柔性缠绕物的尺寸(例如柔性缠绕物的长度和粗细)以及柔性缠绕物所覆盖的区域。
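利用标准件的标准量度推算柔性缠绕物尺寸的上述思路,可用如下草图示意:以标准件的已知实际尺寸与其像素尺寸之比作为比例尺,换算同一幅图像中柔性缠绕物的实际尺寸。注释中的“86毫米”取自常见86型插座面板宽度,仅作示例取值,具体数值以所采用的标准为准。

```python
# 示意性草图:基于图像中标准件(如电源插座)的已知尺寸推算缠绕物尺寸。
# 假设标准件与缠绕物大致处于同一深度平面,忽略透视畸变。

def estimate_size(ref_pixel_width, ref_real_mm, winding_pixel_len):
    """ref_pixel_width: 标准件在图像中的像素宽度;
    ref_real_mm: 标准件对应标准的实际物理宽度(毫米,如 86 型插座面板约 86mm);
    winding_pixel_len: 柔性缠绕物在图像中的像素长度。
    返回柔性缠绕物的估计实际长度(毫米)。"""
    mm_per_pixel = ref_real_mm / ref_pixel_width   # 比例尺:毫米/像素
    return winding_pixel_len * mm_per_pixel
```

例如,插座像素宽43、实际宽86毫米,则图像中200像素长的缠绕物约合400毫米。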
处理装置232调用同时定位与地图构建应用及行为控制应用以对所述清洁机器人的行为进行控制。在本申请中,所述行为控制应用是指根据所设定的信息或指令控制清洁机器人导航、进行姿态调整等。利用所述行为控制应用,可使得处理装置232能对清洁机器人的移动系统25进行控制。
移动系统25与控制系统23相连,用于基于控制系统23所发出的控制指令而驱动清洁机器人移动。在本实施例中,移动系统25与控制系统23中的处理装置232相连,用于基于处理装置232所发出的控制指令而驱动清洁机器人移动。于实际的实施方式中,移动系统25可包括行走机构和驱动机构,其中,所述行走机构可设置于清洁机器人的底部,所述驱动机构内置于所述清洁机器人的壳体内。进一步地,所述行走机构可采用行走轮方式,在一种实现方式中,所述行走机构可例如包括至少两个万向行走轮,由所述至少两个万向行走轮实现前进、后退、转向、以及旋转等移动。在其他实现方式中,所述行走机构可例如包括两个直行行走轮和至少一个辅助转向轮的组合,其中,在所述至少一个辅助转向轮未参与的情形下,所述两个直行行走轮主要用于前进和后退,而在所述至少一个辅助转向轮参与并与所述两个直行行走轮配合的情形下,就可实现转向和旋转等移动。所述驱动机构可例如为驱动电机,利用所述驱动电机可驱动所述行走机构中的行走轮实现移动。在具体实现上,所述驱动电机可例如为可逆驱动电机,且所述驱动电机与所述行走轮的轮轴之间还可设置有变速机构。
清洁系统27与控制系统23相连,用以在所述移动机器人移动时基于控制系统23所发出的控制指令而对地面执行清洁作业。在本实施例中,清洁系统27与控制系统23中的处理装置232相连,用以在所述移动机器人移动时基于处理装置232所发出的控制指令而对地面执行清洁作业。于实际的实施方式中,所述清洁系统可至少包括清扫组件和吸尘组件。所述清扫组件可包括位于壳体底部的清洁边刷以及用于控制所述清洁边刷的边刷电机,其中,所述清洁边刷的数量可为至少两个,分别对称设置于壳体前端的相对两侧,所述清洁边刷可采用旋转式清洁边刷,可在所述边刷电机的控制下作旋转。所述吸尘组件可包括集尘室和吸尘器,其中,所述集尘室内置于壳体,所述吸尘器的出气口与所述集尘室连通,所述吸尘器的进气口设于壳体的底部。当然,清洁系统27并不以此为限,在其他实施方式中,清洁系统27还可包括例如拖地装置、喷雾装置等。
如前所述,当处理装置232在摄像装置21所拍摄的至少一幅图像中识别出柔性缠绕物时,自存储装置230中调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制。鉴于柔性缠绕物的类别、尺寸、以及位置等信息,处理装置232对清洁机器人可采用不同的行为控制方 式,其中,所述清洁机器人的行为可至少包括但不限于:清洁机器人的移动和清洁机器人的姿态。
在某些实施例中,处理装置232调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制的方式可包括:基于柔性缠绕物的信息,向移动系统25发出控制指令以控制清洁机器人按照原导航路线移动并越过所述柔性缠绕物。具体地,若处理装置232识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物不会干扰到清洁机器人的正常运作的情形下,处理装置232向移动系统25发出控制指令以控制清洁机器人按照原导航路线移动并越过所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物为线缆或绳索,所述线缆或绳索的直径较小且规则摆放,或者,判断图像中的柔性缠绕物为尺寸较大的布头且所述布头平铺于地面,处理装置232则可控制清洁机器人按照原导航路线移动并越过所述柔性缠绕物。其中,在控制清洁机器人按照原导航路线移动的过程中,可采用多种实施方式。在一实施方式中,控制清洁机器人按照原导航路线移动为控制清洁机器人以原移动速度及原姿态按照原导航路线进行移动。在另一实施方式中,控制清洁机器人按照原导航路线移动为控制清洁机器人改变原移动速度并以原姿态按照原导航路线进行移动,在这里,改变原移动速度可包括增加移动速度和减小移动速度。在又一实施方式中,控制清洁机器人按照原导航路线移动为控制清洁机器人改变移动速度和改变姿态并按照原导航路线进行移动,在这里,改变原移动速度可包括增加移动速度和减小移动速度。
在某些实施例中,处理装置232调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制的方式可包括:基于柔性缠绕物的信息,向移动系统25发出控制指令以控制清洁机器人修改原导航路线并越过所述柔性缠绕物。具体地,若处理装置232识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物在原导航路线下很可能会干扰到清洁机器人的正常运作但通过改变原导航路线则可避免的情形下,处理装置232向移动系统25发出控制指令以控制清洁机器人修改原导航路线并越过所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物的摆放可能干扰到清洁机器人的正常运作(例如线缆、绳索或线头在长度方向上的摆放与原导航路线基本一致,或者线缆、绳索、线头、丝带等恰好位于按照原导航路线移动时清洁机器人的行走轮或吸尘进气口下方位置),处理装置232则可控制清洁机器人修改原导航路线并越过所述柔性缠绕物,例如,修改原导航路线,使得修改后的导航路线与柔性缠绕物的摆放方向相垂直,可使得清洁机器人越过柔性缠绕物,或者,修改原导航路线,使得清洁机器人越过柔性缠绕物时,柔性缠绕物不会处于新的导航路线中清洁机器人的行走轮或吸尘进气口下方位置。其中,在控制清洁机器人修改原导航路线并按照修改后的导航路线移动的过程中,清洁机器人的移动速度可有不同的实施方式,其移动速度既可以保持不变,也可以增加移动速度或减小移动速度。
在某些实施例中,处理装置232调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制的方式可包括:基于柔性缠绕物的信息,向移动系统25发出控制指令以控制清洁机器人修改原导航路线以避开所述柔性缠绕物。具体地,若处理装置232识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物很可能会干扰到清洁机器人的正常运作的情形下,处理装置232向移动系统25发出控制指令以控制清洁机器人修改原导航路线以避开所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物为线缆、绳索、或布头且所述线缆或绳索不规则摆放,或者,判断图像中的柔性缠绕物为线缆或绳索且其尺寸较大,或者,判断图像中的柔性缠绕物为线头或丝带,处理装置232则可控制清洁机器人修改原导航路线移动,避开所述柔性缠绕物。
在某些实施例中,处理装置232调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制的方式可包括:基于柔性缠绕物的信息,向移动系统25发出控制指令以控制清洁机器人停止移动。具体地,若处理装置232识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物很可能会干扰到清洁机器人的正常运作的情形下或者无法有效判断所述柔性缠绕物对清洁机器人的正常运作的干扰程度的情形下,处理装置232向移动系统25发出控制指令以控制清洁机器人停止移动。在实际应用中,也可作这样的设置:即,一旦识别出图像中存在有柔性缠绕物,则可省去对所述柔性缠绕物的尺寸、摆放等信息进行计算及判断等操作,可直接向移动系统25发出控制指令以控制清洁机器人停止移动。
另外,在某些实施例中,处理装置232调用同时定位与地图构建应用及行为控制应用以对清洁机器人的行为进行控制的方式可包括:基于柔性缠绕物的信息,忽略所述柔性缠绕物,向移动系统25发出控制指令以控制清洁机器人按照原导航路线进行移动。具体地,若处理装置232识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物不在原导航路线上的情形下,处理装置232向移动系统25发出控制指令以控制清洁机器人按照原导航路线进行移动。在一示例中,若判断图像中的柔性缠绕物为线缆或绳索,所述线缆或绳索贴着墙角设置,或者,若判断图像中的柔性缠绕物为线头、丝巾或布头,但所述线头、丝巾或布头在茶几或沙发下而在清洁机器人的导航路线中并不包含对茶几或沙发的清洁,处理装置232则可控制清洁机器人按照原导航路线移动,忽略所述柔性缠绕物。
本申请清洁机器人还可包括报警装置(未予图示),所述报警装置与处理装置232相连,用于在处理装置232识别出所述图像中存在柔性缠绕物时发出报警信息。具体地,若处理装置232识别出所述图像中存在柔性缠绕物,处理装置232即向所述报警装置发出控制指令以控制所述报警装置发出报警信息。所述报警装置及其所发出的报警信息可采用多种实施方式或其结合。在一实施方式中,所述报警装置可例如为蜂鸣器,所述蜂鸣器在处理装置232识别出所述图像中存在柔性缠绕物时发出报警声。在另一实施方式中,所述报警装置可例如为报警灯,所述报警灯在处理装置232识别出所述图像中存在柔性缠绕物时发出报警灯光,所述报警灯光可为常亮光或闪烁光。在又一实施方式中,所述报警装置可例如为信息发送装置,所述信息发送装置在处理装置232识别出所述图像中存在柔性缠绕物时向网络连接的用户终端(例如智能手机)或室内智能终端(例如智能音箱、智能灯泡、智能显示屏等)发送报警信息。利用所述报警装置,可即时发出发现柔性缠绕物的信息,以供后续由操作人员移除柔性缠绕物以排除障碍。
本申请清洁机器人,在清洁机器人的工作模式下,通过拍摄以获取包含地面的图像,对所述图像进行识别,在识别出所述图像中存在柔性缠绕物时调用同时定位与地图构建应用及行为控制应用以对所述清洁机器人的行为进行控制。利用本申请清洁机器人,可针对柔性缠绕物进行有效的检测,并可根据检测结果对清洁机器人的行为进行相应的控制。
请参见图4,显示为本申请移动机器人的控制方法在一实施例中的流程示意图。本申请移动机器人的控制方法应用于移动机器人中,所述移动机器人具有摄像装置和移动系统。如图4所示,本申请移动机器人的控制方法包括如下步骤:
步骤S41,在移动机器人的工作模式下,控制摄像装置进行拍摄以获取包含地面的图像。
在此,可利用摄像装置在移动机器人的导航操作环境下摄取图像。其中,所述摄像装置包括但不限于:照相机、视频摄像机、集成有光学系统或CCD芯片的摄像模块、集成有光学系统和CMOS芯片的摄像模块等。所述摄像装置的供电系统可受移动机器人的供电系统控制,当移动机器人上电移动期间,所述摄像装置即开始摄取图像。
在本申请中,在移动机器人的工作模式下,可控制摄像装置进行拍摄以获取包含地面的图像,这里的“地面”更可具体为移动机器人对照行走路径而在后续会移动到的地面。以清洁机器人为例,在某些实施例中,可利用摄像装置进行拍摄以获取位于清洁机器人移动方向前方的地面的图像。
此外,所述摄像装置可以设于机器人的主体上。以扫地机器人为例,在某些实施方式中,所述摄像装置可设置于移动机器人的顶面。例如,清洁机器人中的摄像装置设置于其壳体的顶面的中部、或边缘上。摄像装置的视野光学轴相对于垂线为±30°。在某些实施方式中,所述摄像装置可设置于移动机器人的顶面与侧面的交接处。例如,在清洁机器人壳体的顶面与侧面的交接处设置至少一个凹陷结构(所述凹陷结构可设于壳体的前端、后端或者侧端),将摄像装置设置于所述凹陷结构内。摄像装置中的镜头光学轴与所述壳体的顶部表面所定义的平面(所述壳体的顶部表面所定义的平面可与水平面相一致,即当将所述移动机器人平稳放置于一水平面时,所述壳体的顶部表面所定义的平面与所述水平面相平行)的夹角α为61°至85°。摄像装置中的镜头为前倾设计,可捕捉到更多的环境信息。例如,前倾设计的摄像装置相比于镜头竖直朝上的摄像装置能更多地捕捉到清洁机器人前方的环境图像,比如,清洁机器人前方的部分地面区域。
所述导航操作环境是指移动机器人依据利用已构建的地图数据而设计的导航路线、或基于随机设计的导航路线移动并进行相应操作的环境。以扫地机器人为例,导航操作环境指扫地机器人依据导航路线移动并进行清洁操作的环境。
步骤S43,对摄像装置所拍摄的至少一幅包含地面的图像进行识别,在识别出图像中存在柔性缠绕物时对移动机器人的行为进行控制。
在步骤S43中,通过对至少一幅包含地面的图像进行识别,并在识别出所述图像中存在柔性缠绕物时,根据所述柔性缠绕物的信息及移动机器人的位置信息,对移动机器人的行为进行相应的控制。
请参见图5,显示为图4的细化流程示意图。
请参阅图5,步骤S43更包括如下细化步骤:
步骤S431,从摄像装置所拍摄的图像中获取至少一幅包含地面的图像。
步骤S433,利用柔性缠绕物图像分类器对所述至少一幅包含地面的图像进行识别以得到识别结果。
对所述至少一幅图像进行柔性缠绕物的识别是利用一柔性缠绕物图像分类器实现的,即,在识别时,将待识别的图像作为一输入输入到柔性缠绕物图像分类器中即可输出识别结果。在本实施例中,所述柔性缠绕物图像分类器是经卷积神经网络训练得到的。针对特定的柔性缠绕物,可制作对应的训练样本集,即,可制作对应线缆的线缆训练样本集(例如,各类线缆在不同地面上以不同型态呈现的图像)、对应绳索的绳索训练样本集(例如,各类绳索在不同地面上以不同型态呈现的图像)、对应丝带的丝带训练样本集(例如,各类丝带在不同地面上以不同型态呈现的图像)、对应布头的布头训练样本集(例如,各类布头在不同地面上以不同型态呈现的图像)、对应棉絮的棉絮训练样本集(例如,各类棉絮在不同地面上以不同型态呈现的图像)、对应植物藤蔓的植物藤蔓训练样本集(例如,各类植物藤蔓在不同地面上以不同型态呈现的图像)等。至于具体的训练过程,可参见前文在针对移动机器人和清洁机器人的技术方案中有提及,在此不再赘述。
在本申请中,在进行图像识别时,将待识别的图像作为一输入输入到柔性缠绕物图像分类器中,即可由柔性缠绕物图像分类器输出相应的识别结果。在某些实施例中,利用柔性缠绕物图像分类器对图像进行识别可至少包括如下步骤:对待识别的图像进行图像预处理;对图像预处理后的所述图像进行特征提取;将待识别的图像的特征输入到柔性缠绕物图像分类器,得到识别结果。
步骤S435,确定柔性缠绕物的信息及移动机器人的位置信息。
在步骤S435中,在识别出图像中存在柔性缠绕物及其类别之后,还包括确定移动机器人的位置信息和所述柔性缠绕物的其他信息(例如,柔性缠绕物在当前物理空间内的位置及其尺寸信息)。
一方面,确定移动机器人的位置信息可包括:获取前、后时刻的至少两幅图像中相匹配特征的位置,并依据图像坐标系与物理空间坐标系的对应关系和所述相匹配特征的位置确定移动机器人的位置及姿态。
于实际的实施方式中,预先构建图像坐标系与物理空间坐标系的对应关系。其中,所述图像坐标系是基于图像像素点而构建的图像坐标系,摄像装置所拍摄的图像中各个图像像素点的二维坐标参数可由所述图像坐标系描述。所述图像坐标系可为直角坐标系或极坐标系等。所述物理空间坐标系是基于实际二维或三维物理空间中各位置而构建的坐标系,其物理空间位置可依据预设的图像像素单位与单位长度(或单位角度)的对应关系而被描述在所述物理空间坐标系中。所述物理空间坐标系可为二维直角坐标系、极坐标系、球坐标系、三维直角坐标系等。
为了构建所述对应关系,在某些实施方式中,移动机器人可基于前后时刻的至少两幅图像中相匹配特征的位置和自所述前一时刻至所述当前时刻所获取的移动信息,构建所述对应关系。
在移动机器人移动期间获取移动机器人的移动信息以及获取摄像装置所拍摄的各个图像。
获取移动机器人的移动信息可通过配置的移动传感装置来实现。所述移动传感装置包括但不限于:位移传感器、陀螺仪、速度传感器、测距传感器、悬崖传感器等,所能获取的移动信息包括但不限于:位移信息、角度信息、与障碍物之间的距离信息、速度信息、行进方向信息等。
对获取的各个图像中的特征进行识别和匹配并得到相匹配特征在各个图像中的图像位置。移动机器人可按照预设的时间间隔或图像数量间隔获取前一时刻和当前时刻的两幅图像,识别并匹配两幅图像中的特征。其中,根据所使用的硬件和软件处理能力的设计,所述时间间隔可在几毫秒至几百毫秒之间选择,所述图像数量间隔可在0帧至几十帧之间选择。所述特征包括但不限于:形状特征和灰度特征等。所述形状特征包括但不限于:角点特征、直线特征、边缘特征、曲线特征等。所述灰度特征包括但不限于:灰度跳变特征、高于或低于灰度阈值的灰度值、图像中包含预设灰度范围的区域尺寸等。
再根据所述图像中相匹配特征的位置和移动信息所提供的物理空间位置来构建所述对应关系。在此,可通过构建物理空间坐标系和图像坐标系的特征坐标参数来建立所述对应关系。例如,可依据所拍摄前一时刻图像所在物理空间位置为物理空间坐标系的坐标原点,并将该坐标原点与图像中相匹配的特征在图像坐标系中的位置进行对应,从而构建两个坐标系的对应关系。
在构建了所述对应关系后,移动机器人获取当前时刻图像和前一时刻图像中相匹配的特征,并依据所述对应关系和所述特征确定机器人的位置及姿态。
为了能够准确定位,所匹配特征的数量通常为多个,例如在10个以上。为此,移动机器人根据所识别的特征在各自图像中位置,从所识别出的特征中寻找能够匹配的特征,从而可依据各个所述特征所对应的图像像素的位置变化来定位机器人的位置及姿态。其中,所述机器人的位置可依据在二维平面内的位移变化而得到,所述姿态可依据在二维平面内的角度变化而得到。
另一方面,确定柔性缠绕物的信息可包括:获取至少一幅图像,依据所述至少一幅图像中特征 的位置确定所述至少一幅图像中柔性缠绕物的位置以及依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息。
以清洁机器人为例,由于清洁机器人通常进行室内清洁工作,因而通过摄像装置所拍摄的图像中的实物一般会包括例如墙、桌、沙发、衣柜、电视机、电源插座、网线插座等。在本示例中,控制摄像装置在清洁机器人的导航操作环境下拍摄图像,之后,通过图像识别来识别所拍摄的图像中实物的图形。其中,所述实物的图形可以由实物的灰度、实物的轮廓等特征表征。同时,所述实物的图形并不限于实物的外部几何图形,还可包括实物上呈现出的其他图形,例如电源插座上的二孔插口、五孔插口,网线插座上的方形插口等。鉴于此,例如,对于外部几何图形相近的电源插座和网线插座,则可利用电源插座的五孔插口与网线插座的方形插口来辨别。此外,在清洁机器人的摄像装置在室内可拍摄的图像中的实物包括电源插座、网线插座的情况下,由于电源插座、网线插座是根据GB标准设计的,因而不会因其所处的环境不同而有所变化,因此,可以作为标准件。标准件的标准物理特征可包括电源插座的长、宽、高,电源插座上五孔插口的结构关系等。在某些实施方式中,标准件的图形和标准件的标准物理特征可以是预设的,并且利用机器人的存储装置预先存储。因此,获取标准件的标准物理特征的方式包括自机器人的存储装置中读取预设的标准物理特征。其中,所述标准件可包括基于行业标准、国家标准、国际标准、和自定义标准中的至少一种标准而设计的标准件。例如,行业标准如机械行业标准JB、建筑材料行业标准JC等;国家标准如中国GB标准、德国DIN标准、英国BS标准等;国际标准如国际ISO标准;自定义标准稍后详述。所述标准物理特征可包括轮廓尺寸、标准结构关系等,例如,标准件的标准物理特征包括标准件实际物理上的长、宽、高,标准件中对应标准的实际物理上的其他尺寸数据等。例如,电源插座上两孔间距。又如电源插座的长宽值。再如底板的长宽值或地砖的长宽值。还比如地毯的长宽值及厚度。
此外,针对所识别的图像中实物的图形和所存储的标准件的图形,通过对图像内容、特征、结构、关系、纹理及灰度等的对应关系、相似性和一致性的分析来确定所识别的至少一个图形与所存储的标准件的图形是否对应,当所识别的至少一个图形与所存储的标准件的图形对应时,获取标准件的标准物理特征。其中,与所存储的标准件的图形对应的所述至少一个图形被称为标准图形。以电源插座为例,存储有标准电源插座图形,因此,通过对图像内容、特征、结构、关系、纹理及灰度等的对应关系、相似性和一致性的分析来确定所识别的至少一个图形与所存储的电源插座的图形是否对应,当所识别的至少一个图形与所存储的电源插座的图形对应时,获取电源插座的标准物理特征。
因此,基于预设的单位像素间隔与实际物理空间中单位长度的对应关系以及所识别的标准图形的尺寸和所对应的标准物理特征中的实物尺寸,移动机器人即可计算出图像中存在的柔性缠绕物在当前物理空间内的位置及其尺寸信息。以设置在墙上的插座为例,当处理装置识别出插座以及墙与地面的交界线、或识别出插座并默认插座被安装在墙上时,按照上述对应关系,不仅可得到柔性缠绕物在当前物理空间内的位置(例如,所述柔性缠绕物与插座的距离和偏角,所述移动机器人相距插座的距离和偏角,所述移动机器人相距柔性缠绕物的距离和偏角等),还可以利用插座的标准量度(例如插座的边框的长宽值或插座中插孔的间距等)的空间位置关系得到柔性缠绕物的尺寸(例如柔性缠绕物的长度和粗细)以及柔性缠绕物所覆盖的区域。
步骤S437,根据确定的柔性缠绕物的信息及移动机器人的位置信息,对移动机器人的行为进行相应的控制。在本实施例中,所述移动机器人的行为可至少包括但不限于:移动机器人的移动和移动机器人的姿态。
在某些实施例中,对移动机器人的行为进行相应的控制可包括:基于柔性缠绕物的信息,控制移动机器人按照原导航路线移动并越过所述柔性缠绕物。即,若识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物不会干扰到移动机器人的正常运作的情形下,则控制移动机器人按照原导航路线移动并越过所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物为线缆或绳索,所述线缆或绳索的直径较小且规则摆放,或者,判断图像中的柔性缠绕物为尺寸较大的布头且所述布头平铺于地面,则可控制移动机器人按照原导航路线移动并越过所述柔性缠绕物。其中,在控制移动机器人按照原导航路线移动的过程中,可采用多种实施方式。在一实施方式中,控制移动机器人按照原导航路线移动为控制移动机器人以原移动速度及原姿态按照原导航路线进行移动。在另一实施方式中,控制移动机器人按照原导航路线移动为控制移动机器人改变原移动速度并以原姿态按照原导航路线进行移动,在这里,改变原移动速度可包括增加移动速度和减小移动速度。在又一实施方式中,控制移动机器人按照原导航路线移动为控制移动机器人改变移动速度和改变姿态并按照原导航路线进行移动,在这里,改变原移动速度可包括增加移动速度和减小移动速度。
在某些实施例中,对移动机器人的行为进行相应的控制可包括:基于柔性缠绕物的信息,控制移动机器人修改原导航路线并越过所述柔性缠绕物。具体地,若识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物在原导航路线下很可能会干扰到移动机器人的正常运作但通过改变原导航路线则可避免的情形下,控制移动机器人修改原导航路线并越过所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物的摆放可能干扰到移动机器人的正常运作(例如线缆、绳索或线头在长度方向上的摆放与原导航路线基本一致,或者线缆、绳索、线头、丝带等恰好位于按照原导航路线移动时移动机器人的行走轮或吸尘进气口下方位置),则可控制移动机器人修改原导航路线并越过所述柔性缠绕物,例如,修改原导航路线,使得修改后的导航路线与柔性缠绕物的摆放方向相垂直,可使得移动机器人越过柔性缠绕物,或者,修改原导航路线,使得移动机器人越过柔性缠绕物时,柔性缠绕物不会处于新的导航路线中移动机器人的行走轮或吸尘进气口下方位置。其中,在控制移动机器人修改原导航路线并按照修改后的导航路线移动的过程中,移动机器人的移动速度可有不同的实施方式,其移动速度既可以保持不变,也可以增加移动速度或减小移动速度。
在某些实施例中,对移动机器人的行为进行相应的控制可包括:基于柔性缠绕物的信息,控制移动机器人修改原导航路线以避开所述柔性缠绕物。具体地,若识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物很可能会干扰到移动机器人的正常运作的情形下,则控制移动机器人修改原导航路线以避开所述柔性缠绕物。在一示例中,若判断图像中的柔性缠绕物为线缆、绳索、或布头且所述线缆或绳索不规则摆放,或者,判断图像中的柔性缠绕物为线缆或绳索且其尺寸较大,或者,判断图像中的柔性缠绕物为线头或丝带,则可控制移动机器人修改原导航路线移动,避开所述柔性缠绕物。
在某些实施例中,对移动机器人的行为进行相应的控制可包括:基于柔性缠绕物的信息,控制移动机器人停止移动。具体地,若识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物很可能会干扰到移动机器人的正常运作的情形下或者无法有效判断所述柔性缠绕物对移动机器人的正常运作的干扰程度的情形下,则控制移动机器人停止移动。
在某些实施例中,对移动机器人的行为进行相应的控制可包括:基于柔性缠绕物的信息,忽略所述柔性缠绕物,控制移动机器人按照原导航路线进行移动。具体地,若识别出柔性缠绕物,结合其类别、尺寸、和/或位置等信息,判断所述柔性缠绕物不在原导航路线上的情形下,则控制移动机器人按照原导航路线进行移动。在一示例中,若判断图像中的柔性缠绕物为线缆或绳索,所述线缆或绳索贴着墙角设置,或者,若判断图像中的柔性缠绕物为线头、丝巾或布头,但所述线头、丝巾或布头在茶几或沙发下而在移动机器人的导航路线中并不包含对茶几或沙发的清洁,则可控制移动机器人按照原导航路线移动,忽略所述柔性缠绕物。
另外,还可包括控制移动机器人发出报警信息。即,若识别出柔性缠绕物,则可即时发出发现柔性缠绕物的信息,以供后续由操作人员移除柔性缠绕物以排除障碍。
如上所述,本申请移动机器人的控制方法,可控制摄像装置进行拍摄以获取包含地面的图像,对所拍摄的至少一幅包含地面的图像进行识别,在识别出图像中存在柔性缠绕物时对移动机器人的行为进行控制。利用本申请移动机器人的控制方法,可针对柔性缠绕物进行有效的检测,并可根据检测结果对移动机器人的行为进行相应的控制。
还需要说明的是,通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到本申请的部分或全部可借助软件并结合必需的通用硬件平台来实现。基于这样的理解,本申请还提供一种电子设备的存储介质,所述存储介质存储有一个或多个程序,当所述一个或多个计算机程序被一个或多个处理器执行时,使得所述一个或多个处理器实现前述的任一项所述的控制方法。
基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可包括其上存储有机器可执行指令的一个或多个机器可读介质,这些指令在由诸如计算机、计算机网络或其他电子设备等一个或多个机器执行时可使得该一个或多个机器根据本申请的实施例来执行操作。例如执行移动机器人的控制方法中的各步骤等。机器可读介质可包括,但不限于,软盘、光盘、CD-ROM(紧致盘-只读存储器)、磁光盘、ROM(只读存储器)、RAM(随机存取存储器)、EPROM(可擦除可编程只读存储器)、EEPROM(电可擦除可编程只读存储器)、磁卡或光卡、闪存、或适于存储机器可执行指令的其他类型的介质/机器可读介质。其中,所述存储介质可位于移动机器人也可位于第三方服务器中,如位于提供某应用商城的服务器中。在此对具体应用商城不做限制,如小米应用商城、华为应用商城、苹果应用商城等。
本申请可用于众多通用或专用的计算系统环境或配置中。例如:个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等。
本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
上述实施例仅例示性说明本申请的原理及其功效,而非用于限制本申请。任何熟悉此技术的人士皆可在不违背本申请的精神及范畴下,对上述实施例进行修饰或改变。因此,举凡所属技术领域中具有通常知识者在未脱离本申请所揭示的精神与技术思想下所完成的一切等效修饰或改变,仍应由本申请的权利要求所涵盖。
Claims (34)
- 一种移动机器人的控制方法,其中,所述移动机器人具有摄像装置,其特征在于,所述控制方法包括以下步骤:在移动机器人的工作模式下,控制所述摄像装置进行拍摄以获取包含地面的图像;以及对所述摄像装置所拍摄的至少一幅包含地面的图像进行识别,在识别出所述图像中存在柔性缠绕物时对所述移动机器人的行为进行控制。
- 根据权利要求1所述的移动机器人的控制方法,其特征在于,对所述摄像装置所拍摄的至少一幅图像进行识别的步骤包括:从所述摄像装置所拍摄的图像中获取至少一幅包含地面的图像;以及利用柔性缠绕物图像分类器对所述至少一幅包含地面的图像进行识别以得到识别结果。
- 根据权利要求2所述的移动机器人的控制方法,其特征在于,所述柔性缠绕物图像分类器是经卷积神经网络训练得到的。
- 根据权利要求1所述的移动机器人的控制方法,其特征在于,还包括如下步骤:获取前、后时刻的至少两幅图像中相匹配特征的位置,依据图像坐标系与物理空间坐标系的对应关系和所述相匹配特征的位置确定机器人的位置及姿态;以及获取至少一幅图像,依据所述至少一幅图像中特征的位置确定所述至少一幅图像中柔性缠绕物的位置以及依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息。
- 根据权利要求4所述的移动机器人的控制方法,其特征在于,所述获取前后时刻的至少两幅图像中相匹配特征的位置的方式包括跟踪至少两幅图像中包含相同特征的位置。
- 根据权利要求4所述的移动机器人的控制方法,其特征在于,所述依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息的方式包括:在所述至少一幅图像中识别出一具有已知尺寸的参照物;以及依据所述参照物的尺寸,推算出所述至少一幅图像中柔性缠绕物的尺寸信息。
- 根据权利要求4所述的移动机器人的控制方法,其特征在于,还包括获取移动机器人的移动信息的步骤。
- 根据权利要求7所述的移动机器人的控制方法,其特征在于,还包括基于两幅图像中相匹配特征的位置和自前后时刻所获取的移动信息,构建图像坐标系与物理空间坐标系的对应关系的步骤。
- 根据权利要求4所述的移动机器人的控制方法,其特征在于,对所述移动机器人的行为进行控制的步骤包括以下任一种:基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人按照原导航路线移动并越过所述柔性缠绕物;基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人修改原导航路线以越过所述柔性缠绕物;以及基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人修改原导航路线以避开所述柔性缠绕物。
- 根据权利要求4所述的移动机器人的控制方法,其特征在于,对所述移动机器人的行为进行控制的步骤包括:基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人停止移动。
- 根据权利要求9或10所述的移动机器人的控制方法,其特征在于,控制所述移动机器人发出报警信息。
- 一种移动机器人,其特征在于,包括:存储装置,存储有同时定位与地图构建应用及行为控制应用;摄像装置,用于在移动机器人的工作模式下获取操作环境的图像;处理装置,与所述存储装置和所述摄像装置相连,用于在移动机器人的工作模式下,控制所述摄像装置进行拍摄以获取包含地面的图像,并在识别出所述图像中存在柔性缠绕物时自所述存储装置中调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制;以及移动系统,与所述处理装置相连,用于基于所述处理装置所发出的控制指令而驱动移动机器人移动。
- 根据权利要求12所述的移动机器人,其特征在于,所述摄像装置设于所述移动机器人的顶面或者顶面与侧面的交接处。
- 根据权利要求12所述的移动机器人,其特征在于,所述处理装置用以从所述摄像装置所拍摄的图像中获取至少一幅图像,并利用一柔性缠绕物图像分类器对所述至少一幅图像进行识别以获得识别结果。
- 根据权利要求14所述的移动机器人,其特征在于,所述柔性缠绕物图像分类器存储于所述存储装置中,或者存储于与所述移动机器人远程通信的云端系统中。
- 根据权利要求12所述的移动机器人,其特征在于,所述处理装置用以自所述存储装置中调用同时定位与地图构建应用及行为控制应用:获取前、后时刻的至少两幅图像中相匹配特征的位置,并依据图像坐标系与物理空间坐标系的对应关系和所述相匹配特征的位置确定移动机器人的位置及姿态;以及获取至少一幅图像,依据所述至少一幅图像中特征的位置确定所述至少一幅图像中柔性缠绕物的位置以及依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息。
- 根据权利要求16所述的移动机器人,其特征在于,还包括跟踪装置,与所述摄像装置相连,用于跟踪前后时刻的至少两幅图像中包含相同特征的位置以获取前后时刻的至少两幅图像中相匹配特征的位置。
- 根据权利要求17所述的移动机器人,其特征在于,还包括移动传感装置,与所述处理装置相连,用于获取移动机器人的移动信息。
- 根据权利要求18所述的移动机器人,其特征在于,还包括初始化装置,用于基于前后时刻的至少两幅图像中相匹配特征的位置和前后时刻所获取的移动机器人的移动信息,构建图像坐标系与物理空间坐标系的对应关系。
- 根据权利要求16所述的移动机器人,其特征在于,所述处理装置调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制的方式包括以下任一种:基于柔性缠绕物的位置和尺寸信息,向所述移动系统发出控制指令以控制所述移动机器人按照原导航路线移动并越过所述柔性缠绕物;基于柔性缠绕物的位置和尺寸信息,向所述移动系统发出控制指令以控制所述移动机器人修改原导航路线并越过所述柔性缠绕物;以及基于柔性缠绕物的位置和尺寸信息,向所述移动系统发出控制指令以控制所述移动机器人修改原导航路线以避开所述柔性缠绕物。
- 根据权利要求16所述的移动机器人,其特征在于,所述处理装置调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制为:基于柔性缠绕物的位置和尺寸信息,向所述移动系统发出控制指令以控制所述移动机器人停止移动。
- 根据权利要求12所述的移动机器人,其特征在于,还包括报警装置,与所述处理装置相连,用于在所述处理装置识别出所述图像中存在柔性缠绕物时发出报警信息。
- 一种移动机器人的控制系统,所述移动机器人配置有摄像装置,其特征在于,所述控制系统包括:存储装置,存储有同时定位与地图构建应用及行为控制应用;以及处理装置,与所述存储装置和所述摄像装置相连,用于在移动机器人的工作模式下, 控制所述摄像装置进行拍摄以获取包含地面的图像,并在识别出所述图像中存在柔性缠绕物时自所述存储装置中调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制。
- 根据权利要求23所述的移动机器人的控制系统,其特征在于,所述处理装置用以从所述摄像装置所拍摄的图像中获取至少一幅图像,并利用一柔性缠绕物图像分类器对所述至少一幅图像进行识别以获得识别结果。
- 根据权利要求24所述的移动机器人的控制系统,其特征在于,所述柔性缠绕物图像分类器存储于所述存储装置中,或者存储于与所述移动机器人远程通信的云端系统中。
- 根据权利要求23所述的移动机器人的控制系统,其特征在于,所述处理装置用以自所述存储装置中调用同时定位与地图构建应用及行为控制应用:获取前、后时刻的至少两幅图像中相匹配特征的位置,并依据图像坐标系与物理空间坐标系的对应关系和所述相匹配特征的位置确定移动机器人的位置及姿态;以及获取至少一幅图像,依据所述至少一幅图像中特征的位置确定所述至少一幅图像中柔性缠绕物的位置以及依据所述至少一幅图像中的标准量度确定所述柔性缠绕物的尺寸信息。
- 根据权利要求26所述的移动机器人的控制系统,其特征在于,还包括跟踪装置,与所述摄像装置相连,用于跟踪前后时刻的至少两幅图像中包含相同特征的位置以获取前后时刻的至少两幅图像中相匹配特征的位置。
- 根据权利要求27所述的移动机器人的控制系统,其特征在于,还包括移动传感装置,与所述处理装置相连,用于获取移动机器人的移动信息。
- 根据权利要求28所述的移动机器人的控制系统,其特征在于,还包括初始化装置,用于基于前后时刻的至少两幅图像中相匹配特征的位置和前后时刻所获取的移动机器人的移动信息,构建图像坐标系与物理空间坐标系的对应关系。
- 根据权利要求26所述的移动机器人的控制系统,其特征在于,所述处理装置调用同时定位与地图构建应用及行为控制应用以对所述移动机器人的行为进行控制的方式包括以下任一种:基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人按照原导航路线移动并越过所述柔性缠绕物;基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人修改原导航路线以越过所述柔性缠绕物;以及基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人修改原导航路线以避开所述柔性缠绕物。
- 根据权利要求26所述的移动机器人的控制系统,其特征在于,所述处理装置调用同时定位与地图构建应用及行为控制应用对所述移动机器人的行为进行控制为基于柔性缠绕物的位置和尺寸信息,控制所述移动机器人停止移动。
- 根据权利要求23所述的移动机器人的控制系统,其特征在于,还包括报警装置,与所述处理装置相连,用于在所述处理装置调取所述程序并识别出所述图像中存在柔性缠绕物时发出报警信息。
- 一种清洁机器人,其特征在于,包括:摄像装置;如权利要求23-32中任一所述的控制系统;移动系统,与所述控制系统相连,用于基于所述控制系统所发出的控制指令而驱动移动机器人移动;清洁系统,与所述控制系统相连,用以在所述移动机器人移动时对地面执行清洁作业。
- 一种计算机可读存储介质,其特征在于,存储有至少一个程序,所述程序被处理器执行时,实现如权利要求1至11中任一项所述移动机器人的控制方法中的各个步骤。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/144,129 US10513037B2 (en) | 2017-12-15 | 2018-09-27 | Control method and system, and mobile robot using the same |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711352639.6A CN108170137A (zh) | 2017-12-15 | 2017-12-15 | 移动机器人及其控制方法和控制系统 |
CN201711352639.6 | 2017-12-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/144,129 Continuation US10513037B2 (en) | 2017-12-15 | 2018-09-27 | Control method and system, and mobile robot using the same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019114219A1 true WO2019114219A1 (zh) | 2019-06-20 |
Family
ID=62522532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/090655 WO2019114219A1 (zh) | 2017-12-15 | 2018-06-11 | 移动机器人及其控制方法和控制系统 |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN108170137A (zh) |
WO (1) | WO2019114219A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112826361A (zh) * | 2020-12-31 | 2021-05-25 | 广州科语机器人有限公司 | 擦窗机器人喷洒控制方法及装置 |
EP4024129A4 (en) * | 2019-08-30 | 2022-09-07 | Taroko Door & Window Technologies, Inc. | SYSTEM AND METHOD FOR BUILDING MATERIAL IMAGE RECOGNITION AND ANALYSIS |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109003262B (zh) * | 2018-06-29 | 2022-06-21 | 炬大科技有限公司 | 顽固污渍清洁方法及装置 |
CN110833357A (zh) * | 2018-08-15 | 2020-02-25 | 格力电器(武汉)有限公司 | 障碍物识别方法及装置 |
JP7529379B2 (ja) * | 2018-08-22 | 2024-08-06 | 株式会社ダイヘン | 移動体 |
CN109684919B (zh) * | 2018-11-15 | 2021-08-17 | 重庆邮电大学 | 一种基于机器视觉的羽毛球发球违例判别方法 |
CN109397293B (zh) * | 2018-11-27 | 2022-05-31 | 上海机器人产业技术研究院有限公司 | 一种基于移动机器人的地面水平误差建模及补偿方法 |
CN111366937B (zh) * | 2018-12-24 | 2022-03-29 | 珠海一微半导体股份有限公司 | 基于超声波的机器人作业方法、作业装置、芯片及机器人 |
CN111358359B (zh) * | 2018-12-26 | 2021-08-24 | 珠海市一微半导体有限公司 | 机器人的避线方法、装置、芯片及扫地机器人 |
CN111358361B (zh) * | 2018-12-26 | 2021-08-24 | 珠海市一微半导体有限公司 | 扫地机器人避线的控制方法、控制装置和计算机存储介质 |
CN111358360B (zh) * | 2018-12-26 | 2021-08-24 | 珠海市一微半导体有限公司 | 机器人避免线缠绕的方法、装置和芯片及扫地机器人 |
CN109799813A (zh) * | 2018-12-27 | 2019-05-24 | 南京理工大学 | 一种多智能体小车分布式编队的实现方法 |
CN111374597B (zh) * | 2018-12-28 | 2021-08-24 | 珠海市一微半导体有限公司 | 清洁机器人避线的方法、装置、存储介质及清洁机器人 |
TWI799587B (zh) * | 2019-01-18 | 2023-04-21 | 清展科技股份有限公司 | 建材影像辨識分析系統及其方法 |
CN109886129B (zh) * | 2019-01-24 | 2020-08-11 | 北京明略软件系统有限公司 | 提示信息生成方法和装置,存储介质及电子装置 |
CN111568307B (zh) * | 2019-02-19 | 2023-02-17 | 北京奇虎科技有限公司 | 机器人执行清扫工作方法、设备及计算机可读存储介质 |
CN110200549A (zh) * | 2019-04-22 | 2019-09-06 | 深圳飞科机器人有限公司 | 清洁机器人控制方法及相关产品 |
CN110051292B (zh) * | 2019-05-29 | 2021-11-02 | 尚科宁家(中国)科技有限公司 | 一种扫地机器人控制方法 |
CN110622085A (zh) * | 2019-08-14 | 2019-12-27 | 珊口(深圳)智能科技有限公司 | 移动机器人及其控制方法和控制系统 |
CN110794831B (zh) * | 2019-10-16 | 2023-07-28 | 深圳乐动机器人股份有限公司 | 一种控制机器人工作的方法及机器人 |
CN111714034B (zh) * | 2020-05-18 | 2022-10-21 | 科沃斯机器人股份有限公司 | 一种自移动机器人的控制方法、系统及自移动机器人 |
TWI790934B (zh) * | 2022-03-03 | 2023-01-21 | 優式機器人股份有限公司 | 機器人避障方法 |
CN115857371A (zh) * | 2022-12-27 | 2023-03-28 | 广州视声智能股份有限公司 | 基于移动终端的智能家居控制方法和装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106527444A (zh) * | 2016-11-29 | 2017-03-22 | 深圳市元征科技股份有限公司 | Control method for a cleaning robot, and cleaning robot |
CN106826749A (zh) * | 2017-01-22 | 2017-06-13 | 深圳悉罗机器人有限公司 | Mobile robot |
US9717387B1 (en) * | 2015-02-26 | 2017-08-01 | Brain Corporation | Apparatus and methods for programming and training of robotic household appliances |
CN107137026A (zh) * | 2017-06-26 | 2017-09-08 | 深圳普思英察科技有限公司 | Cleaning robot and camera fill-light system and method therefor |
CN107291080A (zh) * | 2017-06-27 | 2017-10-24 | 深圳普思英察科技有限公司 | Cleaning robot, obstacle avoidance method, and readable storage medium |
CN107569181A (zh) * | 2016-07-04 | 2018-01-12 | 九阳股份有限公司 | Intelligent cleaning robot and cleaning method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101920498A (zh) * | 2009-06-16 | 2010-12-22 | 泰怡凯电器(苏州)有限公司 | Device for simultaneous localization and mapping of an indoor service robot, and robot |
DE102011000009A1 (de) * | 2011-01-03 | 2012-07-05 | Vorwerk & Co. Interholding Gmbh | Method for simultaneous determination and map building |
CN104407610A (zh) * | 2014-07-21 | 2015-03-11 | 东莞市万锦电子科技有限公司 | Floor cleaning robot system and control method thereof |
KR20170024844A (ko) * | 2015-08-26 | 2017-03-08 | 엘지전자 주식회사 | Mobile robot and control method thereof |
CN106737653A (zh) * | 2015-11-20 | 2017-05-31 | 哈尔滨工大天才智能科技有限公司 | Method for discriminating rigidity and flexibility of obstacles in robot vision |
CN105511478B (zh) * | 2016-02-23 | 2019-11-26 | 百度在线网络技术(北京)有限公司 | Control method applied to a cleaning robot, cleaning robot, and terminal |
2017
- 2017-12-15 CN CN201711352639.6A patent/CN108170137A/zh active Pending
- 2017-12-15 CN CN202011184314.3A patent/CN112506181A/zh active Pending

2018
- 2018-06-11 WO PCT/CN2018/090655 patent/WO2019114219A1/zh active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4024129A4 (en) * | 2019-08-30 | 2022-09-07 | Taroko Door & Window Technologies, Inc. | SYSTEM AND METHOD FOR BUILDING MATERIAL IMAGE RECOGNITION AND ANALYSIS |
CN112826361A (zh) * | 2020-12-31 | 2021-05-25 | 广州科语机器人有限公司 | Spray control method and device for a window-cleaning robot |
CN112826361B (zh) * | 2020-12-31 | 2022-04-19 | 广州科语机器人有限公司 | Spray control method and device for a window-cleaning robot |
Also Published As
Publication number | Publication date |
---|---|
CN112506181A (zh) | 2021-03-16 |
CN108170137A (zh) | 2018-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019114219A1 (zh) | Mobile robot and control method and control system thereof | |
US11042760B2 (en) | Mobile robot, control method and control system thereof | |
US10513037B2 (en) | Control method and system, and mobile robot using the same | |
WO2019232806A1 (zh) | Navigation method, navigation system, movement control system, and mobile robot | |
CN109890573B (zh) | Control method and device for a mobile robot, mobile robot, and storage medium | |
EP3951544A1 (en) | Robot working area map constructing method and apparatus, robot, and medium | |
US10705535B2 (en) | Systems and methods for performing simultaneous localization and mapping using machine vision systems | |
JP7442063B2 (ja) | Control method and control system for a cleaner | |
US11099577B2 (en) | Localization method and system, and robot using the same | |
RU2620236C1 (ru) | Automatic cleaning system, cleaning robot, and method for controlling a cleaning robot |
WO2022027869A1 (zh) | Boundary-based robot region division method, chip, and robot |
CN112867424B (zh) | Method and system for navigation and division of cleaning regions, and mobile and cleaning robot |
WO2019144541A1 (zh) | Cleaning robot |
WO2019007038A1 (zh) | Cleaning robot, cleaning robot system, and working method thereof |
WO2019232804A1 (zh) | Software update method and system, mobile robot, and server |
WO2021146862A1 (zh) | Indoor positioning method for a mobile device, mobile device, and control system |
CN113703439A (zh) | Autonomous mobile device control method, apparatus, device, and readable storage medium |
CN112034837A (zh) | Method for determining the working environment of a mobile robot, control system, and storage medium |
JP7173846B2 (ja) | Cleaner control system, autonomous cleaner, cleaning system, and cleaner control method |
CN114680740B (zh) | Cleaning control method and apparatus, smart device, mobile device, and server |
WO2024140195A1 (zh) | Line-laser-based obstacle avoidance method and apparatus for a self-moving device, device, and medium |
KR102302198B1 (ko) | Cleaning apparatus and control method thereof |
CN111813103B (zh) | Control method, control system, and storage medium for a mobile robot |
US20240197130A1 (en) | Structured light module and self-moving device | |
JP2014026467A (ja) | Identification device, method for controlling an identification device, moving body, control program, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18888719 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18888719 Country of ref document: EP Kind code of ref document: A1 |