CN111123278A - Partitioning method, device and storage medium - Google Patents

Partitioning method, device and storage medium

Info

Publication number
CN111123278A
CN111123278A
Authority
CN
China
Prior art keywords
ground
area
boundary
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911398734.9A
Other languages
Chinese (zh)
Other versions
CN111123278B (en)
Inventor
武乾康
单俊杰
谢凯旋
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201911398734.9A
Publication of CN111123278A
Application granted
Publication of CN111123278B
Active (legal status)
Anticipated expiration


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The embodiments of the present application provide a partitioning method, a partitioning device, and a storage medium. Ground three-dimensional point cloud data are collected within the work area of a mobile device, ground feature information is identified from the point cloud data, and the work area is then partitioned based on that feature information. Because the ground three-dimensional point cloud data are of high precision, the work area is partitioned more accurately, and the partitions can be smaller and more precise; the autonomous mobile device can therefore execute work tasks more flexibly and conveniently based on the partitions, improving the flexibility, efficiency and/or quality of task execution and the overall task execution effect.

Description

Partitioning method, device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a partitioning method, device, and storage medium.
Background
With the development of artificial intelligence technology, robots are becoming increasingly intelligent. For example, a home service robot can build an environment map with a degree of artificial intelligence and rely on that map to complete the corresponding tasks in a work area automatically. However, when a conventional robot executes tasks in a work area, the results are often unsatisfactory.
Disclosure of Invention
Aspects of the present disclosure provide a partitioning method, device, and storage medium to improve the partitioning capability of autonomous mobile devices.
An embodiment of the present application provides a partitioning method, comprising: acquiring ground three-dimensional point cloud data within the work area of an autonomous mobile device; identifying ground feature information in the work area based on the ground three-dimensional point cloud data; and partitioning the work area according to the ground feature information.
An embodiment of the present application further provides a partitioning method applicable to an autonomous mobile device, the method comprising: collecting ground three-dimensional point cloud data within a work area; identifying ground feature information in the work area based on the ground three-dimensional point cloud data; and partitioning the work area according to the ground feature information.
An embodiment of the present application further provides a computing device, comprising one or more memories and one or more processors. The one or more memories store computer programs; the one or more processors, coupled with the one or more memories, execute the computer programs to: acquire ground three-dimensional point cloud data within the work area of an autonomous mobile device; identify ground feature information in the work area based on the ground three-dimensional point cloud data; and partition the work area according to the ground feature information.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the steps of the method embodiments above.
An embodiment of the present application further provides an autonomous mobile device, comprising a device body on which one or more processors and one or more memories storing computer programs are arranged. The one or more processors, coupled with the one or more memories, execute the computer programs to: collect ground three-dimensional point cloud data within a work area; identify ground feature information in the work area based on the ground three-dimensional point cloud data; and partition the work area according to the ground feature information.
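The three steps repeated across the embodiments above (acquire ground three-dimensional point cloud data, identify ground feature information, partition the work area) can be sketched as follows. This is a minimal illustration, not the patented implementation: the grid rasterization, the height-based feature, and all names are assumptions, since the claims leave the concrete feature extraction and partitioning criteria open.

```python
import numpy as np

def partition_work_area(ground_points: np.ndarray, cell: float = 0.05):
    """Sketch of the claimed pipeline: take ground 3D point cloud data,
    derive per-cell ground feature information (here: mean height),
    and split the work area into partitions by thresholding.

    ground_points: (N, 3) array of x, y, z ground samples.
    The height-based feature is an illustrative assumption only.
    """
    # Step 1: "acquire ground three-dimensional point cloud data" is the input.
    xy = ground_points[:, :2]
    z = ground_points[:, 2]

    # Step 2: identify ground feature information: rasterize the points
    # into a grid and collect the height samples per cell.
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    feats = {}
    for key, h in zip(map(tuple, ij), z):
        feats.setdefault(key, []).append(h)

    # Step 3: partition the area according to the feature: cells whose
    # mean height differs (e.g. a door sill or a carpet edge) fall into
    # different partitions.
    return {c: ("raised" if np.mean(h) > 0.01 else "flat") for c, h in feats.items()}
```

On a small cloud containing a flat floor region and a slightly raised region (for example a door sill), the sketch assigns the two regions different labels, which is the essence of feature-based partitioning.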
In the embodiments of the present application, ground three-dimensional point cloud data are collected in the work area of the mobile device, ground feature information is identified from the point cloud data, and the work area is then partitioned based on that feature information. Because the ground three-dimensional point cloud data are of high precision, the work area is partitioned more accurately, so the autonomous mobile device can execute work tasks more flexibly and conveniently based on the partitions; the partitions can be smaller and more precise, improving the flexibility, efficiency and/or quality of task execution and the overall task execution effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic structural diagram of a structured light module according to an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram illustrating the installation and application states, on an autonomous mobile device, of the structured light module of the embodiment shown in FIG. 1a;
FIG. 2a is a schematic structural diagram of another structured light module provided in an exemplary embodiment of the present application;
FIG. 2b is a schematic structural diagram of another structured light module provided in an exemplary embodiment of the present application;
FIG. 2c is a schematic structural diagram of another structured light module according to an exemplary embodiment of the present application;
FIGS. 3a-3e are front, bottom, top, rear, and exploded views, respectively, of the structured light module provided in the embodiment of FIG. 2b;
FIG. 4 is another schematic structural diagram of the structured light module according to the embodiment shown in FIG. 2b;
FIG. 5a is a schematic flowchart of a partitioning method according to an exemplary embodiment of the present application;
FIG. 5b is a schematic structural diagram of a system containing an autonomous mobile device according to an exemplary embodiment of the present application;
FIG. 6a is a schematic flowchart of another partitioning method provided in an exemplary embodiment of the present application;
FIGS. 6b-6d are schematic diagrams of several ground textures provided in exemplary embodiments of the present application;
FIG. 7a is a schematic illustration of several ground boundary patterns provided by an exemplary embodiment of the present application;
FIG. 7b is a diagram of different candidate regions with boundary features according to an exemplary embodiment of the present application;
FIG. 8a is a diagram illustrating an initial partitioning result in a bedroom, as provided in an exemplary embodiment of the present application;
FIG. 8b is a schematic diagram of the initial partitioning result shown in FIG. 8a after correction by the partitioning method provided in the embodiments of the present application;
FIG. 8c is a schematic diagram of an initial partitioning result, exemplified by a living room and a balcony, provided by an exemplary embodiment of the present application;
FIG. 8d is a schematic diagram of the initial partitioning result shown in FIG. 8c after correction by the partitioning method provided in the embodiments of the present application;
FIG. 9a is a diagram illustrating a home layout and initial partitioning results provided by an exemplary embodiment of the present application;
FIG. 9b is a schematic diagram of the initial partitioning result shown in FIG. 9a after correction by the partitioning method provided in the embodiments of the present application;
FIG. 10 is a schematic flowchart of a further partitioning method provided in an exemplary embodiment of the present application;
FIG. 11 is a schematic structural diagram of a computing device according to an exemplary embodiment of the present application;
FIG. 12 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Existing autonomous mobile devices are generally equipped with an LDS (laser distance sensor) or a vision sensor for collecting surrounding environment information, constructing an environment map, and partitioning the work area based on that map, so that the device can execute work tasks partition by partition with reasonable flexibility, efficiency and/or quality. In the prior art, the boundary of the work area is identified during construction of the environment map, and the work area is partitioned according to the identified boundary. Because of limited sensor precision, however, the work area boundary identified in the prior art is not accurate enough, so the partitions are insufficiently accurate and the effect of executing work tasks based on them is not ideal.
In view of the above, in some embodiments of the present application, ground three-dimensional point cloud data are collected within the work area of an autonomous mobile device; ground feature information is identified from the point cloud data, and the work area is partitioned based on that feature information. The collected ground three-dimensional point cloud data are of high precision, so the work area can be partitioned more accurately. The autonomous mobile device can then execute work tasks more flexibly and conveniently based on the partitions; the partitions can be smaller and more precise, improving the flexibility, efficiency and/or quality of task execution and the overall task execution effect.
It should be noted that the methods provided in the embodiments of the present application may be implemented by the autonomous mobile device itself, or executed by another computing device. That other computing device may be a server device provided by the autonomous mobile device vendor (such as a conventional server, a cloud server, or a server array); a terminal device such as a personal notebook or a tablet computer; or a server device provided by a third party (likewise a conventional server, cloud server, or server array). For convenience of description, the server device provided by a third party is referred to as the third-party server device. The following embodiments describe the partitioning methods as executed by these different execution subjects.
In the embodiments of the present application, the autonomous mobile device may be any mechanical device capable of moving through its environment with a high degree of autonomy, for example a robot, a cleaner, or an unmanned vehicle. The robot may be a sweeping robot, an accompanying robot, a guiding robot, or the like. This explanation of the "autonomous mobile device" applies to all embodiments of the present application and is not repeated in the following embodiments.
In the embodiments of the present application, the ground three-dimensional point cloud data in the work area of the autonomous mobile device refers to the set of ground point data obtained by collecting information about the ground in the work area with a sensor. Each ground point datum is three-dimensional and of high accuracy. The work area is the environment region in which the autonomous mobile device executes its work tasks; both the form of the device and its work area vary with the application scenario. Taking a sweeping robot as an example, the work area may be an entire home environment, or one or more areas within it such as the kitchen, living room, toilet, balcony, or bedroom; alternatively, the work area may be a venue, or part of a venue, such as a mall, supermarket, warehouse, subway, or train station.
In the embodiments of the present application, the type of sensor used to collect the ground three-dimensional point cloud data is not limited; any sensor that can collect three-dimensional information about the ground in the work area is suitable. For example, sensors that may be employed include, but are not limited to: three-dimensional vision sensors, line laser sensors, and area array laser sensors (which may be referred to simply as area laser sensors).
Before the methods provided in the embodiments of the present application are described in detail, the area array laser sensor and the line laser sensor that an autonomous mobile device may use are introduced.
Area array laser sensor: the area array laser sensor mainly comprises a laser emitting array and an information acquisition module. The laser emitting array emits planar laser outwards, and the information acquisition module can both capture environment images and receive the reflected light that the planar laser returns from objects. The information acquisition module may include components such as a camera.
The working principle of the area array laser sensor is as follows: the laser emitting array emits planar laser outwards through the optical imaging system in front of it; when the emitted laser reaches the surface of an object, part of it is reflected back and forms pixels on an image through the optical imaging system in front of the information acquisition module. Because different points on the object surface lie at different distances, the time of flight (TOF) of their reflected light differs, so each pixel can obtain independent distance information by measuring that time of flight, with a detection range of over one hundred meters. In addition, the information acquisition module of the area array laser sensor can also capture images of the surrounding environment, achieving fast 3D imaging at megapixel resolution with an imaging rate above 30 frames per second.
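The per-pixel distance measurement described above reduces to one formula: the pulse covers the out-and-back path at the speed of light, so the range is half the round-trip time multiplied by c. A minimal sketch (the function name is an illustrative choice):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface from a measured round-trip
    time of flight: the pulse travels out and back, so halve the path."""
    return C * round_trip_time_s / 2.0
```

A round trip of roughly 667 ns corresponds to about 100 m, consistent with the detection range of over one hundred meters mentioned above.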
The environment information acquired by the area array laser sensor contains not only direction and distance information but also the reflectivity of object surfaces. Combined with deep learning in three-dimensional scenes, this enables recognition of environmental elements. When the laser lines are numerous and dense, the data formed by the reflectivity information can be treated as texture information, from which environmental features of matching and recognition value can be extracted; the resulting environment recognition capability is strong and, to some extent, enjoys the advantages of both vision algorithms and texture information. The area array laser sensor thus combines the strengths of a line laser sensor and a vision sensor: it helps improve the autonomous mobile device's spatial understanding of its environment, can qualitatively improve its obstacle recognition performance, and may even bring its spatial understanding close to the level of human eyes. In addition, compared with sensing schemes based on an image sensor, the area array laser sensor provides more accurate distance and direction information, reduces the complexity of perception computation, and improves real-time performance.
In addition to the above, the area array laser sensor has the following advantages:
1) it lends itself to solid-state design, low cost, and miniaturization;
2) no rotating parts are needed for installation and use, so the structure and size of the sensor can be greatly compressed, its service life extended, and its cost reduced;
3) its viewing angle is adjustable and can be adapted to different autonomous mobile devices, which speeds up scanning and improves scanning precision;
4) it can collect environment information in the horizontal and vertical directions simultaneously and build a 3D map, helping improve the accuracy of map-based functions such as positioning and navigation planning.
It is worth mentioning that, based on the three dimensions of direction, distance, and reflectivity contained in the environment information acquired by the area array laser sensor, the autonomous mobile device can be controlled to realize various perception-based functions. For example, ground three-dimensional point cloud data in the work area can be collected, providing a data basis and support for partitioning the work area.
In this embodiment, the concrete form of the area array laser sensor is not limited; examples include, but are not limited to, solid-state area array lidar. Taking solid-state area array lidar as an example, the autonomous mobile device may carry such a lidar and use it to collect ground three-dimensional point cloud data in the work area, then identify ground feature information from the point cloud data and partition the work area accordingly. Alternatively, after collecting the ground three-dimensional point cloud data with the solid-state area array lidar, the autonomous mobile device may report the data to a computing device independent of it (such as a server device of the autonomous mobile device vendor, a third-party server device, or another computing device); that computing device identifies the ground feature information in the work area from the point cloud data and partitions the work area accordingly. Furthermore, the computing device can return the partitioning result to the autonomous mobile device, so the device can execute work tasks more flexibly and conveniently based on the partitions, improving the flexibility, efficiency and/or quality of task execution and the overall task execution effect.
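The offloaded variant described above amounts to a simple report-and-return protocol: the device serializes its point cloud and reports it, the computing device runs the identification and partitioning steps, and the result travels back. A minimal sketch; the JSON transport and both callables are placeholder assumptions, since the text leaves the communication details open.

```python
import json

def partition_on_server(device_cloud, identify_features, partition):
    """Sketch of the offloaded flow: the autonomous mobile device
    serializes its ground point cloud and reports it; the computing
    device identifies ground features, partitions the work area, and
    returns the result for the device to act on."""
    payload = json.dumps({"ground_points": device_cloud})   # device -> server
    cloud = json.loads(payload)["ground_points"]            # server side
    return partition(identify_features(cloud))              # server -> device
```

With stub callables standing in for the real feature extraction and partitioning, the round trip returns a partition result the device can execute tasks against.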
Line laser sensor: also known as a structured light module. In the embodiments of the present application, a structured light module refers broadly to any module structure containing a line laser emitter and a camera module. In the structured light module, the line laser emitter emits line laser outwards; the camera module can capture environment images and receive the reflected light the line laser returns from objects. The line laser emitted by the line laser emitter lies within the field of view of the camera module; it helps detect information such as the profile, height and/or width of objects within the camera module's field of view, and the camera module captures the environment images probed by the line laser.
Similar to the area array laser sensor, the structured light module works as follows: the line laser emitter emits line laser outwards; when the emitted laser reaches the surface of an object, part of it is reflected back and forms pixels on an image through the optical imaging system of the camera module. Because different points on the object surface lie at different distances, the time of flight (TOF) of their reflected light differs, so each pixel can obtain independent distance and direction information, collectively called position information, by measuring that time of flight.
The field of view of the camera module comprises a vertical field angle and a horizontal field angle. In this embodiment the field angle is not limited, and a camera module with a suitable field angle may be selected according to application requirements. The line laser emitted by the line laser emitter lies within the field of view of the camera module, and the angle between the laser line segment formed on an object surface and the horizontal plane is not limited: the line laser may be parallel or perpendicular to the horizontal plane, or at any other angle, as determined by application requirements.
In the embodiments of the present application, the form of the line laser emitter is not limited; it may be any device or product capable of emitting line laser, for example (but not limited to) a laser tube. Likewise, the form of the camera module is not limited: any vision device capable of capturing environment images is suitable. For example, the camera module may include, but is not limited to, a monocular camera, a binocular camera, and so on.
In the embodiments of the present application, the wavelength of the line laser emitted by the line laser emitter is not limited, and the laser may have different colors, for example red or violet. Correspondingly, a camera module capable of capturing the line laser emitted by the line laser emitter should be adopted, matched to the laser's wavelength; for example, the camera module may be an infrared camera, an ultraviolet camera, a starlight camera, a high-definition camera, and so on.
In the embodiments of the present application, the number of line laser emitters is not limited; there may be one, two, or more. Similarly, the number of camera modules is not limited; there may be one, two, or more. Nor are the installation position and angle of the line laser emitter, or the positional relationship between the line laser emitter and the camera module, limited.
The structures and working principles of several structured light modules that may be used in the embodiments of the present application are briefly described below with reference to FIGS. 1a to 4. Those skilled in the art should understand that the structured light modules listed below are merely illustrative, and the structured light modules usable in the embodiments of the present application are not limited to these examples.
As shown in FIG. 1a, a structured light module 100 mainly includes a line laser emitter 101 and a camera module 102. The line laser emitter 101 may be installed above, below, to the left, or to the right of the camera module 102, as long as the line laser it emits lies within the field of view of the camera module 102. FIG. 1a shows, as an example, the line laser emitter 101 mounted above the camera module 102. As shown in FIG. 1b, in the structured light module 100, the laser line segment formed where the laser plane emitted by the line laser emitter 101 hits an obstacle or the ground ahead is parallel to the ground and perpendicular to the advancing direction of the autonomous mobile device. This type of mounting may be referred to as horizontal mounting. FIG. 1b illustrates the installation and application states of the structured light module 100 on the autonomous mobile device.
As shown in FIG. 1b, while the autonomous mobile device moves forward, the structured light module 100 can be controlled to perform environment detection in a certain manner, for example periodically (every 20 ms), obtaining a series of images. Each image contains a laser line segment formed where the line laser hits an object surface or the ground, and each laser line segment comprises multiple three-dimensional data points; the three-dimensional data on the laser line segments across many environment images form three-dimensional point cloud data.
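Turning the per-frame laser line segments into the three-dimensional point cloud just described requires transforming each segment from the sensor frame into a common world frame using the device pose at capture time. A hedged sketch with illustrative names; the module's real calibration and pose source are not specified in the text.

```python
import math
import numpy as np

def accumulate_point_cloud(frames):
    """Merge per-frame laser line segments into one 3D point cloud.

    frames: iterable of (pose, points) pairs, where pose = (x, y, theta)
    is the device's planar position and heading at capture time, and
    points is an (N, 3) array of the laser line segment in the sensor
    frame. Returns an (M, 3) array in the world frame."""
    world = []
    for (px, py, theta), pts in frames:
        c, s = math.cos(theta), math.sin(theta)
        # Rotate about the z axis by the heading, then translate.
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        world.append(pts @ R.T + np.array([px, py, 0.0]))
    return np.vstack(world)
```

Two frames captured at different poses thus land in one consistent cloud, which is the data the partitioning methods below consume.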
Further optionally, as shown in FIG. 1a, the structured light module 100 may also include a main control unit 103, which controls the operation of the line laser emitter 101 and the camera module 102. Optionally, the main control unit 103 controls the exposure of the camera module 102 and, during that exposure, controls the line laser emitter 101 to emit line laser outwards, so that the camera module 102 captures the environment image probed by the line laser. In FIG. 1a the main control unit 103 is drawn with a dashed box, indicating that it is an optional unit.
As shown in FIG. 2a, another structured light module 200a mainly includes a camera module 201a and line laser emitters 202a distributed on both sides of the camera module 201a. The structured light module 200a of this embodiment can be applied to an autonomous mobile device that includes a main controller; the main controller is electrically connected to the camera module 201a and the line laser emitters 202a respectively and controls their operation.
Optionally, the main controller controls the exposure of the camera module 201a and, during that exposure, controls the line laser emitters 202a to emit line laser, so that the camera module 201a captures the environment image probed by the line laser. The main controller may control the line laser emitters 202a on the two sides of the camera module 201a to operate simultaneously or alternately, which is not limited here.
As shown in fig. 2b, another structured light module 200b mainly includes: a camera module 201b, line laser transmitters 202b distributed on both sides of the camera module 201b, and a main control unit 203b. The main control unit 203b is electrically connected to the camera module 201b and the line laser transmitters 202b, and can control them to work. The line laser transmitters 202b emit line laser outwards under the control of the main control unit 203b; the camera module 201b collects an environment image detected by the line laser under the control of the main control unit 203b.
Optionally, the main control unit 203b performs exposure control on the camera module 201b on one hand, and on the other hand controls the line laser emitters 202b to emit line laser outwards during the exposure, so that the camera module 201b collects an environment image detected by the line laser. The main control unit 203b may control the line laser transmitters 202b located on the two sides of the camera module 201b to work simultaneously or alternately, which is not limited herein. The main control unit 203b is further configured to provide the environment image to the autonomous mobile device, and in particular to its main controller, when the structured light module 200b is applied to an autonomous mobile device.
As shown in fig. 2c, another structured light module 200c mainly includes: a camera module 201c, line laser transmitters 202c distributed on both sides of the camera module 201c, a first control unit 203c and a second control unit 204c. The first control unit 203c is electrically connected to the line laser transmitters 202c, the second control unit 204c and the camera module 201c, respectively; the camera module 201c is also electrically connected to the second control unit 204c.
The second control unit 204c performs exposure control on the camera module 201c, and a synchronization signal generated by each exposure of the camera module 201c is output to the first control unit 203c. The first control unit 203c controls the line laser emitters 202c to operate alternately according to the synchronization signal, and provides a laser source distinguishing signal to the second control unit 204c; the second control unit 204c marks each environment image acquired by the camera module 201c as coming from the left or right laser according to the distinguishing signal. The second control unit 204c is further configured to provide the marked environment images to the autonomous mobile device, and in particular to its main controller, when the structured light module 200c is applied to an autonomous mobile device.
In the structured light modules shown in figs. 2a to 2c, the total number of line laser emitters is not limited and may be, for example, two or more. The number of line laser emitters on each side of the camera module is likewise not limited and may be one or more; the numbers on the two sides may be the same or different. Figs. 2a to 2c are illustrated with one line laser emitter on each side of the camera module, but the invention is not limited thereto; for example, 2, 3 or 5 line laser emitters may be arranged on each of the left and right sides of the camera module.
In the structured light modules shown in figs. 2a to 2c, the distribution of the line laser emitters on the two sides of the camera module is not limited, and may be, for example, uniform or non-uniform, symmetrical or asymmetrical. Uniform and non-uniform distribution may refer to the spacing between the line laser emitters on the same side of the camera module, or may equally be understood as applying to all the line laser emitters on both sides taken as a whole. Symmetrical and asymmetrical distribution refers to the line laser emitters on the two sides of the camera module viewed as a whole; symmetry here covers both the number of emitters and their mounting positions. For example, in the structured light modules shown in figs. 2a to 2c, there are two line laser emitters, symmetrically distributed on the two sides of the camera module.
In the structured light modules shown in figs. 2a to 2c, the mounting position relationship between the line laser emitters and the camera module is not limited; any mounting arrangement in which line laser emitters are distributed on the two sides of the camera module is applicable to the embodiments of the present application. The mounting position relationship is related to the application scene of the structured light module and can be determined flexibly accordingly. The mounting position relationship includes the following aspects:
Installation height: the line laser emitters and the camera module may be located at different heights. For example, the line laser emitters on the two sides may be higher than the camera module, or the camera module may be higher than the emitters on both sides; or the emitter on one side may be higher than the camera module while the emitter on the other side is lower. Preferably, however, the line laser emitters and the camera module are located at the same height. For example, in actual use the structured light module may be mounted on a device (e.g., an autonomous mobile device such as a robot, a purifier, or an unmanned vehicle), in which case the line laser emitters and the camera module are located at the same distance from the work surface (e.g., the floor), such as 47 mm, 50 mm, 10 cm, 30 cm or 50 cm.
Installation distance: the installation distance is the mechanical distance (also called the baseline distance) between a line laser transmitter and the camera module, and can be set flexibly according to the application requirements of the structured light module. The size of the measurement blind zone is determined to a certain extent by the mechanical distance between the line laser transmitter and the camera module, the detection distance the device (such as a robot) carrying the structured light module needs to satisfy, the diameter of the device, and similar information. For a given device, the diameter is fixed, while the measurement range and the mechanical distance between the line laser transmitter and the camera module can be set flexibly as required; this means the mechanical distance and the blind zone range are not fixed values. On the premise of guaranteeing the measurement range (or performance) of the device, the blind zone should be made as small as possible; a larger mechanical distance between the line laser transmitter and the camera module gives a larger controllable distance range, which helps control the size of the blind zone.
In some application scenarios, the structured light module is applied to a sweeping robot, and may be mounted, for example, on the striking plate or on the robot body. For the sweeping robot, a reasonable range for the mechanical distance between the line laser emitter and the camera module is given as an example: the mechanical distance may be greater than 20 mm, further optionally greater than 30 mm, and furthermore greater than 41 mm. It should be noted that the distance range given here applies not only when the structured light module is used on a sweeping robot, but also on other devices whose specification and size are close or similar to those of a sweeping robot.
Emission angle: the emission angle is the included angle between the centre line of the line laser emitted by a line laser emitter, after installation, and the installation baseline. The installation baseline is the straight line on which the line laser emitters and the camera module lie when they are mounted at the same height. In this embodiment, the emission angle of the line laser transmitter is not limited. The emission angle is related to the detection distance the device (such as a robot) carrying the structured light module needs to satisfy, the radius of the device, and the mechanical distance between the line laser emitter and the camera module. Once these three quantities are determined, the emission angle can be obtained directly through a trigonometric relation, i.e. the emission angle is then a fixed value.
Of course, if a specific emission angle is required, it can be adjusted by changing the detection distance the device (such as a robot) needs to satisfy and the mechanical distance between the line laser emitter and the camera module. In some application scenarios, with the required detection distance and the radius of the device fixed, the emission angle of the line laser emitter can be varied within a certain range, for example 50-60 degrees, by adjusting the mechanical distance between the line laser emitter and the camera module, but this is not limiting.
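The trigonometric relation mentioned above can be illustrated under a deliberately simplified 2-D model: put the camera at the origin, the emitter offset along the mounting baseline by the mechanical (baseline) distance, and measure the emission angle from that baseline; the laser centre line then crosses the camera's optical axis at a forward distance of baseline × tan(angle). This is an assumed geometric simplification for illustration, not the patent's exact formula, which also involves the device radius.

```python
import math

def crossing_distance_mm(angle_deg, baseline_mm):
    """Forward distance (mm) at which the laser centre line crosses
    the camera's optical axis, in the simplified 2-D model."""
    return baseline_mm * math.tan(math.radians(angle_deg))

def emission_angle_deg(crossing_mm, baseline_mm):
    """Inverse relation: emission angle for a desired crossing distance."""
    return math.degrees(math.atan2(crossing_mm, baseline_mm))

# With a 41 mm baseline, a 55-degree emission angle puts the crossing
# point roughly 58.6 mm in front of the camera.
print(round(crossing_distance_mm(55.0, 41.0), 1))  # 58.6
```

This also shows why the emission angle becomes a fixed value once the other quantities are fixed: each (distance, baseline) pair maps to exactly one angle.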
For ease of use, the structured light module provided by the embodiments of the present application further includes, besides the camera module and the line laser emitters, bearing structures that carry them. The bearing structure may be implemented in various ways, which are not limited here. In some optional embodiments, the bearing structure includes a fixing base, and may further include a fixing cover used together with the fixing base. Taking the structured light module 200b shown in fig. 2b as an example, the structure with the fixing base and the fixing cover is described with reference to figs. 3a-3e, which are a front view, a bottom view, a top view, a rear view and an exploded view of the structured light module 200b; because of the viewing angles, no single view shows all components, so only some components are labelled in figs. 3a-3e. As shown in figs. 3a-3e, the structured light module 200b further includes a fixing base 204b, on which the camera module and the line laser transmitters are assembled.
Further optionally, as shown in fig. 3e, the fixing base 204b includes a main body 205b and end portions 206b located on both sides of the main body 205b. The camera module is assembled on the main body 205b, and the line laser transmitters are assembled on the end portions 206b. The end face of each end portion 206b faces the reference surface, so that the centre line of the line laser emitter and the centre line of the camera module intersect at one point; the reference plane is a plane perpendicular to the end surface of the main body 205b or to its tangent.
In an alternative embodiment, to facilitate fixing and reduce the influence on the appearance of the structured light module, as shown in fig. 3e, a groove 208b is formed in the middle of the main body 205b and the camera module is installed in it; each end portion 206b is provided with a mounting hole 209b in which a line laser transmitter is mounted. Further optionally, as shown in fig. 3e, the structured light module 200b is further equipped with a fixing cover 207b above the fixing base 204b; a cavity formed between the fixing cover 207b and the fixing base 204b accommodates the connecting lines of the camera module and the line laser transmitters. The fixing cover 207b and the fixing base 204b can be fastened together by a fixing member; in fig. 3e the fixing member is illustrated as a screw 210b, but it is not limited to a screw implementation.
In an optional embodiment, the lens of the camera module is located inside the outer edge of the groove 208b, i.e. the lens is recessed within the groove, which prevents it from being scratched or knocked and helps protect it.
In the embodiments of the present application, the shape of the end surface of the main body 205b is not limited and may be, for example, a flat surface, or a curved surface recessed inwards or bulging outwards. The shape varies depending on the device in which the structured light module is installed. For example, if the structured light module is applied to an autonomous mobile device whose outer contour is circular or elliptical, the end surface of the main body 205b may be implemented as an inwardly recessed curved surface adapted to that contour. If the structured light module is applied to an autonomous mobile device whose outer contour is square or rectangular, the end surface of the main body 205b may be implemented as a plane adapted to that contour. The autonomous mobile device with a circular or elliptical contour may be, for example, a sweeping robot or window cleaning robot with such a contour; likewise, the autonomous mobile device with a square or rectangular contour may be a sweeping robot or window cleaning robot with such a contour.
In an alternative embodiment, for an autonomous mobile device with a circular or elliptical outer contour, in order to better match the appearance of the device and make maximum use of its space, the radius of the curved end surface of the main body 205b is the same as, or approximately the same as, the radius of the device. For example, if the outer contour of the autonomous mobile device is circular with a radius of 170 mm, the radius of the curved surface of the main body portion may be 170 mm or approximately 170 mm, for example in the range 170 mm to 172 mm, but is not limited thereto.
Further, when the structured light module is applied to an autonomous mobile device with a circular or elliptical outer contour, the emission angle of the line laser emitters in the module is mainly determined by the detection distance the device needs to satisfy, the radius of the device, and so on. In this scenario, the end surface of the main body of the structured light module, or its tangent, is parallel to the installation baseline, so the emission angle of a line laser emitter can equally be defined as the included angle between the centre line of the emitted line laser and the end surface of the main body (or its tangent). In some application scenarios, with the detection distance and radius of the autonomous mobile device determined, the emission angle may be implemented in the range of 50-60 degrees, but is not limited thereto.
The structured light module provided by the above embodiments of the present application is structurally stable and small in size, fits the appearance of the whole machine, greatly saves space, and can support many types of autonomous mobile device.
Further, the structured light modules shown in fig. 1a and figs. 2a to 2c may further include a laser driving circuit. The laser driving circuit is electrically connected to the line laser transmitter and is mainly used to amplify the control signal sent to it. In the structured light modules shown in figs. 2a to 2c, the number of laser driving circuits is not limited: several line laser transmitters may share one laser driving circuit, or each line laser transmitter may correspond to its own laser driving circuit, the latter being preferred. Fig. 4 takes the structured light module 200b as an example, in which each line laser emitter 202b corresponds to one laser driving circuit 211b. In fig. 4, the laser driving circuit 211b is mainly used to amplify the control signal sent by the main control unit 203b to the line laser transmitter 202b, and to provide the amplified signal to the transmitter to control it. In the embodiments of the present application, the circuit structure of the laser driving circuit 211b is not limited; any circuit structure that can amplify a signal and provide it to the line laser transmitter 202b is suitable.
In some embodiments of the present application, the autonomous mobile device may be fitted with a structured light module, use it to collect ground three-dimensional point cloud data in its work area, identify ground feature information in the work area based on that data, and partition the work area according to the ground feature information. Alternatively, after collecting the ground three-dimensional point cloud data with the structured light module, the autonomous mobile device may report the data to a computing device independent of it (such as a server device of the autonomous mobile device provider, a third-party server device, or another computing device); the computing device then identifies the ground feature information in the work area based on the data and partitions the work area accordingly. Furthermore, the computing device can return the partition result to the autonomous mobile device, so that the device can execute job tasks more flexibly and conveniently on a per-partition basis, improving the flexibility, efficiency and/or quality of task execution and the overall task execution effect.
It should be noted that, whether an area array laser sensor or a structured light module is used, after it senses the ground in the work area of the autonomous mobile device and obtains an environment image, the image may be subjected to various processing to extract ground three-dimensional point cloud data. Such processing includes, but is not limited to: blurring the image areas other than the one corresponding to the area laser or line laser, computing the position information of the pixel points in the image area corresponding to the laser, and converting that position information from the sensor coordinate system to the coordinate system of the autonomous mobile device and/or the world coordinate system. In the embodiments of the present application, the coordinate system used by the ground three-dimensional point cloud data is not limited; it may be the coordinate system of the autonomous mobile device or the world coordinate system.
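The first processing step listed above, isolating the laser's image area, can be sketched as follows. A plain intensity threshold stands in for a real stripe detector, and the function name and threshold value are illustrative assumptions only.

```python
import numpy as np

def extract_laser_pixels(gray, threshold=200):
    """Keep only the bright laser stripe in a grayscale image and
    return its (u, v) pixel coordinates; everything else is ignored
    (a crude stand-in for blurring the non-laser image areas)."""
    mask = gray >= threshold
    rows, cols = np.nonzero(mask)
    return np.stack([cols, rows], axis=1)  # one (u, v) pair per pixel

# Toy 4x4 image with a bright "stripe" along row 2.
img = np.zeros((4, 4), dtype=np.uint8)
img[2, :] = 255
pixels = extract_laser_pixels(img)
print(len(pixels))  # 4
```

In practice the pixel positions recovered here would then go through the coordinate conversions the paragraph describes.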
The above-described area array laser sensor and several structured light modules are suitable for use in the method embodiments described below. In addition to area array laser sensors and structured light modules, other three-dimensional sensors, such as three-dimensional vision sensors, are also suitable for use in the embodiments described below in this application. The methods provided by the embodiments of the present application will be described in detail below with reference to fig. 5a to 10.
Fig. 5a is a flowchart illustrating a partitioning method according to an exemplary embodiment of the present application. As shown in fig. 5a, the partition method includes:
501. Acquire ground three-dimensional point cloud data in the work area of the autonomous mobile device.
502. Identify ground feature information in the work area based on the ground three-dimensional point cloud data.
503. Partition the work area according to the ground feature information.
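The three steps above can be sketched as a minimal skeleton. The helper functions, the toy material rule, and the data layout are all hypothetical placeholders for illustration, not the patent's actual method:

```python
def identify_ground_features(cloud):
    # Hypothetical step 502: label each ground point with a feature
    # (here a toy material rule based only on the x coordinate).
    return ["wood" if x < 1.0 else "tile" for x, _, _ in cloud]

def partition_by_features(points, features):
    # Hypothetical step 503: group points that share a feature label.
    groups = {}
    for point, feature in zip(points, features):
        groups.setdefault(feature, []).append(point)
    return groups

# Step 501 stand-in: a tiny "collected" ground point cloud of (x, y, z).
cloud = [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0), (0.7, 0.2, 0.0)]
parts = partition_by_features(cloud, identify_ground_features(cloud))
print(sorted(parts))  # ['tile', 'wood']
```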
The execution subject of this embodiment may be a server device 51 provided by the autonomous mobile device provider, or a server device provided by a third party (referred to as the third-party server device 52 for short), or another computing device 53.
As shown in fig. 5b, the autonomous mobile device 50 is communicatively coupled to the server device 51, the third-party server device 52, or another computing device 53, and the connection may be wireless or wired. Optionally, the autonomous mobile device 50 may be connected through a mobile network, and accordingly the network format of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, or a new network format to be developed in the future. Optionally, the autonomous mobile device 50 may also be communicatively coupled to the server device 51, the third-party server device 52 or another computing device 53 via Bluetooth, WiFi, infrared, ZigBee, or NFC.
The autonomous mobile device 50 is provided with a structured light module, an area array laser sensor, a three-dimensional vision sensor, or another sensor capable of acquiring ground three-dimensional point cloud data; these are collectively referred to as three-dimensional sensors. For the implementation structure and operating principle of the various types of three-dimensional sensor, refer to the description of the foregoing embodiments, which is not repeated here. The autonomous mobile device 50 collects ground three-dimensional point cloud data within the work area using a three-dimensional sensor.
Next, taking a structured light module as an example of the three-dimensional sensor, the process by which the autonomous mobile device 50 acquires ground three-dimensional point cloud data in its work area is described. Specifically, while the autonomous mobile device 50 is moving, on one hand the line laser emitters in the structured light module are controlled to emit line laser outwards; the laser strikes the ground in the area ahead and is reflected back. On the other hand, the camera module in the structured light module is controlled to collect environment images of the area ahead. During this period, if the line laser reaches the ground in the area ahead, it forms a laser line segment on the ground, which can be captured by the camera module; in other words, the environment image collected by the camera module contains the laser line segment formed where the line laser emitted by the line laser emitter meets the ground. The laser line segment comprises a plurality of pixel points, each corresponding to a ground point, and each ground point datum is three-dimensional, i.e. the coordinates of the ground point on the x, y and z axes. Further, the pixel points on the laser line segments across a large number of environment images can form the ground three-dimensional point cloud data. The coordinate system used by the ground point data may be the world coordinate system or the coordinate system of the autonomous mobile device, which is not limited.
For example, if the world coordinate system is adopted, the autonomous mobile device 50 may convert the coordinates of the pixel points in the environment images collected by the structured light module into the world coordinate system according to the transformation relationships among the coordinate system of the structured light module, the coordinate system of the autonomous mobile device, and the world coordinate system, thereby obtaining ground three-dimensional point cloud data in the world coordinate system.
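The chain of coordinate transformations just described can be sketched with homogeneous 4x4 matrices. The function name and the example poses (a sensor offset 0.1 m forward of the robot centre, a robot at world position (2, 3)) are assumptions for illustration; real calibrated transforms would include rotations as well.

```python
import numpy as np

def to_world(points_sensor, T_robot_sensor, T_world_robot):
    """Chain sensor->robot and robot->world homogeneous transforms
    to move 3-D points into the world coordinate system."""
    pts = np.asarray(points_sensor, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    return (T_world_robot @ T_robot_sensor @ homo.T).T[:, :3]

# Assumed poses: pure translations, no rotation, for simplicity.
T_rs = np.eye(4); T_rs[0, 3] = 0.1                 # sensor in robot frame
T_wr = np.eye(4); T_wr[0, 3] = 2.0; T_wr[1, 3] = 3.0  # robot in world frame
world = to_world([[0.0, 0.0, 0.0]], T_rs, T_wr)
print(world.round(2).tolist())  # [[2.1, 3.0, 0.0]]
```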
After the autonomous mobile device 50 acquires ground three-dimensional point cloud data in the work area using a three-dimensional sensor (e.g., a structured light module), it may upload the data to the server device 51, the third-party server device 52, or another computing device 53; the receiving device then partitions the work area of the autonomous mobile device 50 according to the ground three-dimensional point cloud data.
Specifically, the server device 51, the third party server device 52 or the other computing device 53 identifies the ground feature information in the working area of the autonomous moving device 50 according to the ground three-dimensional point cloud data; the work area of the autonomous mobile device 50 is then partitioned according to the ground characteristic information. In the embodiment of the present application, the ground feature information is information that assists in partitioning the work area and that can reflect the ground features in the work area. In short, the ground characteristic information can distinguish different partitions to a certain extent.
In an alternative embodiment, as shown in fig. 5b, after partitioning the work area, the server device 51, the third party server device 52, or the other computing device 53 may return the result of partitioning the work area to the autonomous mobile device 50; the autonomous mobile device 50 receives the partition results returned by the server device 51, the third party server device 52, or other computing device 53 and stores them locally. Further, the autonomous mobile device 50 can flexibly and conveniently execute job tasks on a partition basis based on the partition result of the locally saved job region.
For example, the autonomous mobile device 50 may autonomously execute job tasks partition by partition upon receiving a job instruction. Alternatively, the server device 51, the third-party server device 52, or another computing device 53 may return the partition result of the work area to a terminal device bound to the autonomous mobile device 50. The terminal device runs an APP or client software for controlling the autonomous mobile device 50; through it, the user can view the partition result and then send the autonomous mobile device 50 a job instruction for one or more partitions, instructing it to execute the job task on the corresponding partitions.
Whichever mode of operation is used, this embodiment benefits from the high precision of ground three-dimensional point cloud data, which makes it possible to partition the work area more accurately. The autonomous mobile device can then execute job tasks on a per-partition basis more flexibly and conveniently, and the partitions can be smaller and more precise, which improves the flexibility, efficiency and/or quality of task execution and the overall task execution effect.
In the embodiment of the present application, the ground three-dimensional point cloud data refers to any three-dimensional point cloud data related to the ground in the working area, which can be acquired by using a three-dimensional sensor, and includes data of ground texture, ground topography, ground boundary, boundary contour, obstacles on the ground and the like. In view of the abundance of ground three-dimensional point cloud data, the ground feature information in the work area that can be identified based on the ground three-dimensional point cloud data is also diversified. In the embodiment of the present application, the implementation form of the ground feature information that can be identified based on the ground three-dimensional point cloud data is not limited, and all the ground feature information having a certain degree of distinction for the operation partition is suitable for the embodiment of the present application. The ground characteristic information is different according to different scenes of the operation area.
For example, in some application scenarios, different areas use different floor materials: some use cement mortar, some marble, some terrazzo, some ceramic tile, some wood flooring, some plastic, and some carpet. Different floor materials thus characterize different areas. On this basis, the ground material category can serve as one kind of ground feature information, but it is not limited thereto.
For another example, in some application scenarios, structures or components with a dividing function may be disposed between different partitions, forming ground boundaries at the borders of adjacent regions. For example, doors such as sliding doors, side-hung doors, folding doors, or roller doors may be provided between some areas; the door divides the space into one area inside it and another outside it. These doors generally include a door body and a guide structure disposed on the ground to fit the door body, and the guide structure differs with the type of door. A sliding door or folding door needs a matching push-pull strip (an example of a guide structure) on the ground; a roller door needs a matching roller-door bottom beam (another example); a side-hung door needs a matching door sill (another example), and so on. For another example, there may be steps between some regions, one region below the steps and the other above, with a height difference between the two. These guide structures or steps on the ground can serve as ground boundary lines: they belong to the ground features and can separate different areas. In view of this, a ground boundary such as a guide structure or step may be used as ground feature information, but it is not limited thereto.
In other application scenarios, ground material, doors, steps and the like may all be used to divide different areas. In these application scenarios, the ground feature information may include both the ground material category and the ground boundary, the latter being formed by door guide structures on the ground, steps, and the like.
With reference to the foregoing example, another partitioning method provided in the exemplary embodiment of the present application is shown in fig. 6a, and includes the following steps:
601. Acquire ground three-dimensional point cloud data in the operation area of the autonomous mobile device.
602. Identify the ground material category and/or the ground boundary in the operation area based on the ground three-dimensional point cloud data.
603. Partition the operation area according to the ground material category and/or the ground boundary.
In this embodiment, the operation area of the autonomous mobile device includes a plurality of sub-areas, and the ground characteristics of different sub-areas differ: either the ground material categories of the sub-areas are different, or a ground boundary exists between the sub-areas. Based on this, after the server device 51, the third-party server device 52 or the other computing device 53 obtains the ground three-dimensional point cloud data acquired by the autonomous mobile device 50 using the structured light module, information such as the ground material category and/or the ground boundary in the operation area may be analyzed based on the ground three-dimensional point cloud data; the operation area of the autonomous mobile device is then divided into different partitions according to this information.
In an alternative embodiment, the server device 51, the third party server device 52 or the other computing device 53 identifies the ground material category in the work area based on the ground three-dimensional point cloud data; and partitioning the operation area of the autonomous mobile equipment according to the ground material type. In this embodiment, the regions with the same ground material type and adjacent regions may be divided into the same sub-region, and the regions with different ground material types and adjacent regions may be divided into different sub-regions.
In yet another alternative embodiment, the server device 51, third party server device 52 or other computing device 53 identifies a ground boundary within the work area based on the ground three-dimensional point cloud data; and partitioning the working area of the autonomous moving equipment according to the ground boundary. In this embodiment, the areas on both sides of the ground dividing line may be divided into different zones.
In yet another alternative embodiment, the server device 51, the third party server device 52 or other computing device 53 identifies the ground material category and the ground boundary within the work area based on the ground three-dimensional point cloud data; and partitioning the operation area of the autonomous mobile equipment according to the ground material type and the ground boundary. In this embodiment, regions having the same ground material type, being adjacent and having no ground boundary may be divided into the same partition, and regions having different ground material types and being adjacent or adjacent regions having a ground boundary may be divided into different partitions.
Further, in view of the characteristics of different ground materials, different ground materials may be adopted in different areas according to different requirements, and the ground textures formed by different ground materials differ. Taking a home environment as an example, it generally comprises several areas such as a bedroom, a living room, a kitchen, a bathroom and a balcony, and the floor materials adopted in these areas differ. For example, bedrooms usually use wood flooring, while the floors of living rooms, kitchens, toilets, balconies and similar areas are mostly ceramic tile. Fig. 6b shows the texture of the wood floor in a bedroom, fig. 6c the texture of the tile floor in a living room or balcony, and fig. 6d the texture of the tile floor in a toilet or kitchen. As shown in figs. 6b-6d, different ground materials produce different ground textures, so ground texture features and ground material categories have a certain correspondence. Based on this, in step 602, one embodiment of identifying the ground material category includes: calculating ground texture features in the operation area according to the ground three-dimensional point cloud data, and then identifying the ground material category in the operation area based on those ground texture features. Further, in step 603, the operation area may be partitioned according to the ground material category within the operation area.
In an optional embodiment, to facilitate calculation of ground texture features within the work area, the ground three-dimensional point cloud data may be divided into a plurality of subsets; each subset corresponds to a portion of the ground area within the work area and includes a plurality of ground point data. The ground point data is ground point data within the subset corresponding ground area. The ground point data here is three-dimensional data including position coordinates of each ground point on three coordinate axes of x, y, and z. The three coordinate axes x, y, and z may be coordinate axes in a world coordinate system, or coordinate axes in a coordinate system in which the autonomous mobile device is located. Further, texture features of the ground area corresponding to each of the plurality of subsets may be calculated according to the ground point data in the plurality of subsets.
Further, consider that the ground three-dimensional point cloud data in the operation area is acquired by the autonomous mobile device traversing the operation area over a period of time using a three-dimensional sensor. The ground three-dimensional point cloud data can therefore be divided into a plurality of subsets according to acquisition time, each subset comprising the ground point data acquired by the three-dimensional sensor (such as a structured light module) on the autonomous mobile device within the same time period, and the texture features of the ground areas corresponding to the subsets are then calculated from the ground point data in each subset. For example, during operation of the autonomous mobile device, every 20 ms may be taken as an acquisition time window, with the ground point data acquired within each 20 ms window forming one subset; the ground point data in each subset are then analyzed to obtain the texture features of the ground area corresponding to that subset.
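The time-window grouping described above can be sketched as follows. This is a minimal Python illustration; the `(timestamp_ms, x, y, z)` tuple layout and the function name `split_by_time` are assumptions for illustration, not part of the patent:

```python
from collections import defaultdict

def split_by_time(points, window_ms=20):
    """Group ground point data into subsets by acquisition-time window.

    `points` is an iterable of (timestamp_ms, x, y, z) tuples; all points
    captured within the same `window_ms` window form one subset, mirroring
    the 20 ms grouping described in the text.
    """
    subsets = defaultdict(list)
    for t, x, y, z in points:
        subsets[int(t // window_ms)].append((x, y, z))
    # Return the subsets in chronological order of their time windows.
    return [subsets[k] for k in sorted(subsets)]
```

Grouping by integer window index keeps points captured within the same 20 ms together without assuming the input stream is sorted.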
It should be noted that, in addition to dividing the ground three-dimensional point cloud data into a plurality of subsets according to the acquisition time, the ground point data in the same orientation may also be divided into one subset according to the approximate orientation in which the autonomous mobile device is located during the process of acquiring the ground three-dimensional point cloud data. The embodiment of the application does not limit the dividing mode of dividing the ground three-dimensional point cloud data into a plurality of subsets. It should be noted that the ground point data included in each subset belongs to the same region.
In the embodiment of the application, the ground texture feature refers to a feature formed by patterns or lines on the ground in the working area, and is a linear texture feature presented on the ground. The corresponding ground flatness of different ground materials is different, the depths and widths of the formed ravines are different, and the ravines are oriented differently when the floor is used in different sub-areas. Taking the three ground textures shown in fig. 6b-6d as an example, the flatness of the tile ground is better than that of the wood floor, the depths and widths of the ravines of the wood floor and the tile ground are different, and the orientations of the ravines of the tile ground and the wood floor are different. In the embodiments of the present application, the content included in the ground texture feature is not limited, and may include, but is not limited to, at least one of the following: ground flatness, depth of the ravines on the ground, width of the ravines, and direction of the ravines. The process of computing the texture features of their corresponding ground areas is the same or similar for the plurality of subsets. In the following embodiments, the process of calculating texture features is described by taking any one of the plurality of subsets as an example. For ease of description and distinction, this subset is referred to as the first subset. Based on this, according to the ground point data in the first subset, calculating the texture features of the ground region corresponding to the first subset, including: and calculating at least one texture feature of the ground flatness, the ravine depth, the ravine width and the ravine orientation of the ground region corresponding to the first subset according to the ground point data in the first subset.
Continuing with the three ground textures shown in figs. 6b-6d as an example: the wood-floor texture shown in fig. 6b appears mostly in bedrooms; fig. 6c shows a tile-floor texture whose gullies run in the same direction as the room, common in living rooms, balconies and similar areas, where the tiles are laid parallel to the width and length of the room floor; fig. 6d shows a tile-floor texture whose gullies run at an angle to the room, common in toilets and kitchens, where the tiles are laid at an angle to the width and length of the room floor. Therefore, different areas within the operation area can be distinguished according to information such as the ground flatness, gully depth, gully width and gully orientation of the ground texture. The calculation of several ground texture features is described below by way of example:
ground flatness:for the ground flatness of the ground area corresponding to the first subset, the ground flatness can be calculated according to the variance of the ground point data in the first subset. For example, the variance of the ground point data in the first subset may be directly used as ground flatness for the ground area corresponding to the first subsetAnd (4) degree. The larger the variance is, the worse the flatness of the ground area is; the smaller the variance, the better the flatness of the land area. For example, the variance of the ground point data in the first subset may be used as a basis to calculate various numerical values of the variance, and the result of the numerical value calculation may reflect the ground flatness of the ground area corresponding to the first subset.
In an alternative embodiment, consider that the three-dimensional sensor (e.g., the structured light module) acquires all data within its field of view; for example, if there are obstacles of some height on the ground within the field of view of the three-dimensional sensor, the obstacle information is also acquired. The obstacle information is meaningless for calculating ground texture features, but it interferes with the calculation and reduces its accuracy. Based on this, before calculating the ground flatness, the ground point data contained in the first subset can be filtered to remove obviously unreasonable ground point data; for example, ground point data outside a set value range may be filtered out. The variance of the remaining ground point data is then calculated, and the ground flatness of the ground area corresponding to the first subset is determined according to the variance. The value range can be set flexibly according to the application scenario. For example, if the value range is (-0.5 cm, +0.5 cm), ground point data with a height outside ±0.5 cm is filtered out; finally, the variance of the remaining point data is calculated, and the result is taken as the ground flatness information.
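A minimal Python sketch of the filtered-variance flatness measure described above; the function name and the ±0.5 cm default are illustrative (the range is meant to be set per application scenario):

```python
import statistics

def ground_flatness(z_values, limit_cm=0.5):
    """Flatness of a ground area as the variance of the z coordinates,
    after filtering out points whose height lies outside ±limit_cm
    (obstacle returns). A larger value means a less flat area."""
    kept = [z for z in z_values if -limit_cm < z < limit_cm]
    if len(kept) < 2:
        return 0.0  # too few valid ground points to measure spread
    return statistics.pvariance(kept)
```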
Gully depth: the gully depth of the ground area corresponding to the first subset can be calculated from the coordinates of the ground point data in the first subset. Assuming that, in the world coordinate system, the plane formed by the x-axis and the y-axis lies on the ground and the z-axis points vertically upward, calculating the gully depth of the ground area corresponding to the first subset includes: determining the ground point data located on the same straight line in the ground area corresponding to the first subset according to the x-axis and y-axis coordinates of the ground point data in the first subset; the "same straight line" here may be the x-axis or a line parallel to the x-axis, or the y-axis or a line parallel to the y-axis; then, determining whether a gully exists in the ground area corresponding to the first subset, and its depth if so, according to the z-axis coordinates of the ground point data located on that line. If the z-axis coordinates of the ground point data on the same line are less than 0, the line is a gully; if they are equal to 0, the line is not a gully but part of the flat plane; if they are greater than 0, the line is a protrusion on the ground. In this embodiment, the three-axis coordinates of the ground point data are coordinates in the world coordinate system.
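The depth rule above (z < 0 on a line means a gully, z = 0 flat, z > 0 a protrusion) can be sketched as follows; the name `classify_line` and the tolerance value are hypothetical:

```python
def classify_line(z_coords, tol=1e-6):
    """Classify a set of collinear ground points by their z coordinates:
    below the ground plane -> ("gully", depth); at zero -> ("flat", 0.0);
    above -> ("protrusion", height). Uses the mean z of the line."""
    mean_z = sum(z_coords) / len(z_coords)
    if mean_z < -tol:
        return ("gully", -mean_z)        # depth is the magnitude below z = 0
    if mean_z > tol:
        return ("protrusion", mean_z)
    return ("flat", 0.0)
```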
Gully width: the gully width of the ground area corresponding to the first subset can be calculated from the coordinates of the ground point data in the first subset. Assuming that, in the world coordinate system, the plane formed by the x-axis and the y-axis lies on the ground and the z-axis points vertically upward, calculating the gully width of the ground area corresponding to the first subset includes either of the following: calculating, with the x-axis as reference, the difference between the x-axis coordinates of ground point data on adjacent gullies, and taking the difference as the width between the adjacent gullies in the x-axis direction; or calculating, with the y-axis as reference, the difference between the y-axis coordinates of ground point data on adjacent gullies, and taking the difference as the width between the adjacent gullies in the y-axis direction. If the ground area corresponding to the first subset contains gullies parallel to the y-axis, the difference between the x-axis coordinates of ground point data on two adjacent gullies parallel to the y-axis can be calculated with the x-axis as reference; this difference is the width between the two gullies. Likewise, if the ground area contains gullies parallel to the x-axis, the difference between the y-axis coordinates of ground point data on two adjacent gullies parallel to the x-axis can be calculated with the y-axis as reference; this difference is the width between the two gullies.
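A small sketch of the width calculation for gullies parallel to the y-axis; representing each gully by a single x coordinate is an assumed simplification:

```python
def gully_widths(gully_x_positions):
    """Widths between adjacent gullies that are parallel to the y-axis.

    Each gully is represented by one x coordinate; the width between two
    adjacent gullies is the difference of their x coordinates."""
    xs = sorted(gully_x_positions)
    return [b - a for a, b in zip(xs, xs[1:])]
```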
Gully orientation: for the orientation of the gullies in the ground area corresponding to the first subset, the included angle between each gully and a reference axis can be calculated, taking any coordinate axis as the reference axis; the orientation of each gully is then determined from the included angle and the orientation of the reference axis.
Taking the x-axis as the reference axis: from the method for calculating gully depth in the above embodiment, each gully contains multiple ground point data. From the x-axis and y-axis coordinates of these data, the included angle between the gully and the x-axis can be calculated, and combining this angle with the orientation of the x-axis gives the orientation of the gully. For example, if the angle between a gully and the x-axis is 40°, and the x-axis points due east with the y-axis pointing due north, the orientation of the gully is 40° north of east; if the x-axis instead points 40° east of north, the actual orientation of the gully can be determined to be 80° east of north or due north (40° ± 40°). Similarly, the methods for calculating gully orientation with the y-axis or the z-axis as the reference axis follow the method with the x-axis as reference and are not repeated here.
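A hedged sketch of the orientation calculation; the compass-bearing convention (measured clockwise from north, with the x-axis assumed to point due east by default) is an illustrative assumption, not the patent's stated convention:

```python
import math

def gully_bearing(p1, p2, x_axis_bearing_deg=90.0):
    """Compass bearing of a gully passing through ground points p1 and p2
    (given as (x, y) pairs). The angle between the gully and the x-axis
    comes from the point coordinates; combining it with the x-axis's own
    bearing yields the gully's orientation. Bearings are measured clockwise
    from north; the default x_axis_bearing_deg=90 means x points due east."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle_to_x = math.degrees(math.atan2(dy, dx))   # CCW from the x-axis
    # Convert a counter-clockwise-from-x angle to a clockwise-from-north bearing.
    return (x_axis_bearing_deg - angle_to_x) % 360.0
```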
In any case, after obtaining ground texture features such as ground flatness, gully depth, gully width and gully orientation for the ground areas corresponding to the plurality of subsets, the ground material categories present in the operation area can be determined from the texture features of the ground areas corresponding to the subsets. If the texture features of the ground areas corresponding to two adjacent subsets are the same, the ground materials of those two areas are very likely the same; if the texture features differ, the ground materials are very likely different. Here, "two adjacent subsets" means that the ground areas corresponding to the two subsets are adjacent or contiguous.
In an optional embodiment, the texture features of the ground areas corresponding to the plurality of subsets may be directly compared to obtain the ground material categories existing in the working area. In another optional embodiment, the plurality of subsets may be clustered according to texture features of the ground areas corresponding to the plurality of subsets, respectively, to obtain at least one clustering result; the subsets with the same or similar texture features are clustered together, which means that the texture features corresponding to different clustering results are different or not similar, that is, each clustering result represents a category of the ground material.
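The clustering step could be sketched with a simple greedy scheme as below; a real implementation might use k-means or similar, and the tolerance value is an assumption:

```python
def cluster_by_texture(features, tol=0.1):
    """Greedy clustering of per-subset texture feature vectors: a subset
    joins the first cluster whose representative is within `tol` on every
    feature, otherwise it starts a new cluster. Each resulting cluster
    stands for one ground-material category."""
    clusters = []   # list of (representative_vector, member_indices)
    for i, f in enumerate(features):
        for rep, members in clusters:
            if all(abs(a - b) <= tol for a, b in zip(rep, f)):
                members.append(i)
                break
        else:
            clusters.append((f, [i]))
    return [members for _, members in clusters]
```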
It should be noted that, in the embodiment of the present application, only several types of ground material categories and ground areas corresponding to the ground material categories need to be identified, and it is not necessary to exactly identify what material each type of ground material is. For example, it is possible to recognize that the working area includes three types of ground materials, P1, P2 and P3, P1 appears in the a region, P2 appears in the b region, and P3 appears in the c region, within the working area, without determining whether the three types of ground materials, P1, P2 and P3, are wood floor, tile or plastic, respectively. Of course, the technical scheme of identifying what material each floor material type is also applicable to the embodiment of the present application.
For example, ground texture features can be divided into the following two categories: relatively low ground flatness with large gully length and width; and high ground flatness with small gully length and width. These two categories of ground texture features correspond to different ground materials. A ground with relatively low flatness and large gully length and width can be preliminarily judged as possibly being wood flooring; a ground with high flatness and small gully length and width can be preliminarily judged as possibly being ceramic tile. In addition, tile grounds in different partitions show obvious differences in texture features such as gully orientation, so by combining features such as gully orientation, tile areas can be further divided into tile materials used in different scenes or for different purposes. Partitioning the operation area according to such fine-grained ground-material information can improve partitioning precision and accuracy.
Further optionally, after the ground material category is identified, it may be marked at the corresponding position in the environment map corresponding to the operation area, providing a basis for subsequently partitioning the operation area by ground material category. The autonomous mobile device may further include other sensors such as an LDS (Laser Distance Sensor) or a visual sensor; these sensors can also collect surrounding environment information while the autonomous mobile device travels, and an environment map corresponding to the operation area can be constructed from the environment information they collect. The environment map may be constructed in advance, or constructed in real time during the partitioning process; this is not limited.
In the embodiment of the application, based on the ground three-dimensional point cloud data, the ground material category in the operation area can be identified, and the ground boundary line in the operation area can also be identified. The ground boundary is generally a terrain having a relatively pronounced undulating form on the ground. Taking a home environment as an example, the home environment generally comprises areas such as a main bed, a secondary bed, a living room, a kitchen, a toilet, a balcony and the like, and different rooms can be divided by doors, steps and the like. As shown in fig. 7a, there may be a threshold stone, a sliding door rail, or a step having a height difference between the a room and the B room. The rooms a and B may be any two adjacent areas in a home environment, such as a living room and kitchen, a bedroom and living room, a bathroom and living room, a living room and balcony, etc. These threshold stones, sliding door tracks or steps with height differences belong to terrains with more pronounced relief patterns on the ground, which are also characterised more clearly and generally become the boundaries between the areas. Accordingly, one embodiment of identifying a ground boundary includes: calculating the ground terrain features in the operation area according to the ground three-dimensional point cloud data; and further, the ground boundary in the operation area can be identified according to the ground topographic characteristics. The ground topographic features are different from the ground textural features, and the ground topographic features refer to unique signs and marks of the ground undulating form. For example, bumps, depressions, etc. that are more pronounced on the ground are characteristic of the terrain of the ground.
In some special scenarios, terrain with a large relief pattern on the ground may be formed by specific obstacles that do not function as boundaries. In a home environment, for example, a wooden stick placed in the living room produces a ground topographical feature similar to that of a threshold stone, but the stick is isolated, whereas a threshold stone is usually attached to a wall. Accordingly, in this embodiment, the ground boundary in the operation area is recognized by combining the ground topographical features with the boundary of the operation area, which improves the recognition accuracy of the ground boundary. Specifically, the identification process includes: first, identifying a candidate area with boundary-line features in the operation area according to the ground topographical features; then judging whether the candidate area is connected to the boundary of the operation area; if it is, determining the candidate area to be a ground boundary; otherwise, determining that the candidate area does not belong to a ground boundary. The boundary-line features may be extracted in advance and preset in the method's execution body. For example, in a home environment, the features of the three boundaries shown in fig. 7a may be pre-extracted and preset in the server device 51, the third-party server device 52 and the other computing devices 53 shown in fig. 5b. The boundary of the operation area may be identified based on environment information collected by other sensors on the autonomous mobile device, such as an LDS or a visual sensor; for example, the locations of walls, wardrobes and cabinets form the boundaries in a home environment.
Taking the home environment as an example, as shown in fig. 7b, two thick lines represent two candidate areas s1 and s2 having boundary line characteristics identified in the room a, and the surrounding thin lines represent the boundary of the room a, i.e., the wall surface. Wherein the candidate regions s1 and s2 both have the characteristics of the threshold stone. Further, the two candidate regions s1 and s2 are identified in connection with the boundary of room a, and it is found that: candidate area s1 is isolated and not connected to any of the boundaries of room a, candidate area s2 is connected to the boundaries of room a, and it is determined that candidate area s2 may be a threshold stone and candidate area s1 may be an obstacle placed in room a.
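The connectivity test that separates a threshold stone (s2) from an isolated obstacle (s1) can be sketched on a grid map as follows; the grid-map representation and 4-neighbourhood adjacency are assumptions for illustration:

```python
def is_ground_boundary(candidate_cells, boundary_cells):
    """A candidate area with boundary-line features (e.g. a threshold-
    stone-like ridge) counts as a ground boundary only if it touches the
    work-area boundary; an isolated candidate (e.g. a stick lying on the
    floor) is rejected. Cells are (row, col) grid-map coordinates."""
    boundary = set(boundary_cells)
    offsets = ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))  # cell itself + 4-neighbourhood
    return any((r + dr, c + dc) in boundary
               for r, c in candidate_cells
               for dr, dc in offsets)
```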
Further alternatively, after the ground boundary within the work area is identified, the identified ground boundary may be marked at a position corresponding to the candidate area in the environment map corresponding to the work area, providing a condition for partitioning based on the ground boundary.
In various embodiments of the present disclosure, after identifying the ground material type and/or the ground boundary within the work area, the work area may be partitioned according to the ground material type and/or the ground boundary. In the present embodiment, the embodiment of partitioning the work area according to the type of the ground material and/or the ground boundary is not limited.
Alternatively, an operation of partitioning the work area may be associated with the environment map corresponding to the work area. For example, an environment map corresponding to the work area may be acquired, where the environment map includes a plurality of initial partitions divided according to boundaries of the work area; and correcting the plurality of initial subareas according to the ground material type and/or the ground boundary to obtain a target subarea included in the working area.
The environment map may be constructed according to environment information in the work area, may be constructed in real time in a partitioning process, or may be constructed in advance. In addition, the environment information used for constructing the environment map can be acquired by an LDS or a visual sensor on the autonomous mobile equipment, can also be acquired by a three-dimensional sensor (such as a structured light module) for acquiring ground three-dimensional point cloud data, and can also be combined with the environment information acquired by various sensors to construct the environment map corresponding to the operation area. In the process of constructing the environment map, the boundary of the work area may be obtained and marked in the environment map.
Taking a home environment as an example, as shown in fig. 8a, in a bedroom, during cleaning of the bedroom, the autonomous mobile device may collect environmental information in the bedroom by using the LDS, the visual sensor and/or the structured light module carried by the autonomous mobile device, construct an environmental map corresponding to the bedroom according to the collected environmental information, and mark boundaries formed by walls, a bed and a bedside table, which are not marked in fig. 8a for simplicity of illustration. The environment map may be built by the autonomous mobile device itself, or may be built by the server device 51, the third-party server device 52, and the other computing devices 53, which is not limited thereto.
Further, as shown in fig. 8a, during or after construction of the environment map, the bedroom may be partitioned according to a conventional partitioning method, yielding partitions C1, C2 and C3. The conventional partitioning method partitions the bedroom according to the boundaries within it: partition C1 is divided by the boundary formed by one side of the bed and the adjacent wall surface, partition C2 by the boundary formed by the other side of the bed and the adjacent wall surface, and partition C3 by the boundary formed by the foot of the bed and the surrounding wall surfaces. However, the bedroom is in fact a single area, yet in fig. 8a it is divided into three partitions; it can be seen that the initial partitions contained in the environment map may not be accurate.
In this embodiment, when the bedroom needs to be partitioned, the autonomous mobile device collects ground three-dimensional point cloud data in the bedroom by using the structured light module and uploads the ground three-dimensional point cloud data to the server device 51, the third-party server device 52 or the other computing device 53. The server device 51, the third party server device 52 or other computing devices 53 may identify the ground material type and/or the ground boundary in the bedroom according to the ground three-dimensional point cloud data; and then correcting the initial subareas in the environment map according to the ground material classes and/or the ground boundary lines to obtain subarea results with higher accuracy. In this embodiment, the server device 51, the third party server device 52 or the other computing device 53 may recognize that the ground material category in the bedroom is the same category, and no ground boundary exists in the bedroom, and the boundary of the push-pull bar is recognized at the sliding door of the bedroom, so that it can be determined that the bedroom is a partition, and the bedroom and the area outside the sliding door belong to different partitions. Based on this, partitions C1, C2 and C3 can be merged to obtain the result of partitioning as shown in fig. 8b, i.e. the bedroom belongs to one partition D1.
In some application scenarios, the plurality of initial partitions can be corrected according to the ground material category alone to obtain the target partitions included in the operation area. Correcting the plurality of initial partitions according to the ground material category mainly means merging the plurality of initial partitions according to the ground material category.
Further, the combining the initial partitions according to the ground material category to obtain the target partitions included in the working area specifically includes: comparing whether the ground material classes of any two adjacent initial subareas are the same or not; if the ground material classes of the two initial partitions are the same and the two initial partitions have connectivity, that is, the two initial partitions are not completely separated by a boundary and/or a ground boundary, merging the two initial partitions into one partition; after the merging process is performed on all the adjacent initial partitions, the target partition included in the work area can be obtained. As shown in fig. 8a and 8b, the three partitions C1, C2 and C3 in fig. 8a have the same ground material category and have connectivity with each other, and thus are merged into one partition D1.
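A sketch of the merge step using union-find; the partition labels, material labels and adjacency list are illustrative inputs (an adjacency edge stands for two partitions that are not fully separated by a wall or ground boundary):

```python
def merge_partitions(labels, materials, adjacency):
    """Union-find merge of initial partitions: two partitions collapse into
    one target partition when they carry the same ground-material label and
    are adjacent (an adjacency edge means the two are not fully separated
    by a wall or a ground boundary)."""
    parent = list(range(len(labels)))

    def find(i):                       # find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in adjacency:
        if materials[a] == materials[b]:
            parent[find(a)] = find(b)  # union same-material neighbours

    groups = {}
    for i, label in enumerate(labels):
        groups.setdefault(find(i), []).append(label)
    return list(groups.values())
```

With the fig. 8a example, C1, C2 and C3 all share the same wood-floor material and mutual connectivity, so they collapse into the single partition D1.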
In other application scenarios, the target partitions included in the work area may be obtained by correcting the plurality of initial partitions according to the ground boundary alone. Correcting the plurality of initial partitions according to the ground boundary mainly means splitting them according to the ground boundary.
Further, splitting the initial partitions within the work area based on the ground boundary includes: determining a to-be-processed partition containing a ground boundary among the plurality of initial partitions, and dividing the to-be-processed partition into at least two partitions with the ground boundary it contains as the dividing line, to obtain the target partitions included in the work area.
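A minimal sketch of this splitting step, under the assumption that the to-be-processed partition is rasterized into a grid in which cells covered by the identified ground boundary (e.g. a sliding-door track) are marked; flood fill then yields the resulting sub-partitions. The grid layout is hypothetical:

```python
def split_by_boundary(grid):
    """Flood-fill labelling: free cells (0) separated by ground-boundary
    cells (1) fall into different target partitions."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and labels[r][c] == 0:
                count += 1
                stack = [(r, c)]
                labels[r][c] = count
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 0 and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return count, labels

# A rasterized partition with a vertical sliding-door track at column 2,
# analogous to splitting E1 into F1 and F2 in fig. 8c/8d.
grid = [[0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0]]
print(split_by_boundary(grid)[0])  # 2 target partitions
```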
As shown in fig. 8c, when the living room and the balcony are partitioned in a conventional manner, since the boundary (wall surface) between the living room and the balcony is not very apparent, the living room and the balcony are merged into a single initial partition, i.e., partition E1 in fig. 8c. In this embodiment, the ground boundary between the living room and the balcony, i.e., the sliding-door track, can be identified; the initial partition E1 is then split into two partitions F1 and F2 along the sliding-door track, as shown in fig. 8d.
In still other application scenarios, the plurality of initial partitions may be corrected according to both the ground material category and the ground boundary to obtain the target partitions included in the work area. One embodiment of correcting the plurality of initial partitions according to the ground material category and the ground boundary includes: first merging the plurality of initial partitions according to the ground material category to obtain merged partitions; then splitting the merged partitions according to the ground boundary to obtain the target partitions included in the work area. Another embodiment includes: first splitting the plurality of initial partitions according to the ground boundary to obtain split partitions; then merging the split partitions according to the ground material category to obtain the target partitions included in the work area. For the description of the splitting and merging operations, reference may be made to the foregoing embodiments, which are not repeated here.
In the embodiment of the application, the autonomous mobile device collects ground three-dimensional point cloud data through the structured light module and partitions the work area based on the collected data. Owing to the high detection precision of line laser, the work area can be partitioned more accurately and the partitioning result has higher precision. The autonomous mobile device can then execute job tasks based on partitions more flexibly and conveniently, and the range of a job partition can be smaller and more precise, which helps to improve the flexibility, efficiency and/or quality of job-task execution and the overall task execution effect.
The following describes the technical solution of the present application in detail with reference to scenario embodiment 1, taking an autonomous mobile device as a home service robot as an example, and taking a structured light module installed on the home service robot as an example.
Scenario example 1:
in a real-life scenario, the home service robot mainly works in a home environment. Fig. 9a shows a relatively common floor plan; the work area of the home service robot may include the master bedroom, secondary bedroom, living room, kitchen, toilet and balcony areas shown in fig. 9a. In fig. 9a, the different areas are connected by doors, for example hinged doors or sliding doors. Normally, a threshold stone is provided below a hinged door and a track is provided below a sliding door, and the threshold stone and the track clearly separate different areas. In addition, the floor material usually differs between areas; for example, the bedrooms are mainly laid with wood flooring, while the living room, kitchen, toilet and balcony are mostly laid with marble tiles.
To meet home service requirements, it is necessary to partition the home environment. Based on the partition, the user can designate the home service robot to perform a job to a designated area without performing the job for the entire home environment. For example, if the home service robot is a sweeping robot, the user may instruct the sweeping robot to sweep the kitchen or to sweep the living room. And aiming at different areas, the sweeping robot can adopt different sweeping modes, so that the cleaning quality and efficiency are ensured, and the user experience can be improved.
In this scenario embodiment, an environment map corresponding to the home environment may be established. For example, the home service robot may collect environment information in the home environment by means of the LDS or the visual sensor when performing a task in the home environment for the first time, and build an environment map according to the environment information, and may also perform a preliminary partitioning on the environment map in a conventional manner, the preliminary partitioning result being shown in fig. 9a and including partitions G1-G8, but this partitioning result is not accurate. The construction of the environment map and the initial partitioning are merely exemplary and not limiting.
In the embodiment of the scene, the home service robot is provided with the structured light module, the structured light module can be used for acquiring ground three-dimensional point cloud data of the operation area and uploading the ground three-dimensional point cloud data to a server corresponding to the home service robot. In the home environment shown in fig. 9a, the floor materials of the areas such as bedroom, living room, kitchen, bathroom, balcony, etc. are different, and the areas such as bedroom, living room, kitchen, bathroom, balcony, etc. are divided by sliding doors or sliding doors, and the guiding structures of the doors on the floor form boundary lines on the floor. Based on the method, the server can identify the ground material type and the ground boundary according to the ground three-dimensional point cloud data, and further perform auxiliary partitioning on the home environment by combining the ground material type and the ground boundary. For example, the initial partition in the environment map shown in fig. 9a may be modified according to the ground texture type and the ground boundary.
Specifically, the server may identify ground texture features such as flatness, ravine depth, ravine width and ravine orientation of the ground in the home environment according to the ground three-dimensional point cloud data, and may distinguish region ranges having different ground material categories according to these texture features. For example, it can be recognized that the secondary bedroom and the master bedroom have the same floor material, and that the living room, balcony, kitchen, toilet and so on have the same floor material. Further, the server may identify ground boundaries in the home environment, such as the threshold stone of a hinged door or a sliding-door track, from the ground three-dimensional point cloud data.
The initial partitions shown in fig. 9a may be merged based on the ground material category. The initial partition G7 and the initial partition G4 are completely separated by a boundary formed by a wall surface and a sliding door, so no merging is needed between them. Similarly, other initial partitions that are completely isolated from each other, such as G8, G6, G7 and G5, require no merging. The initial partitions G1-G3 have connectivity with each other and the same ground material category, so these three initial partitions can be merged into one partition H1, as shown in fig. 9b. In addition, the initial partitions G4 and G5 also have connectivity and the same ground material, and these two initial partitions may be merged into one partition H2, as shown in fig. 9b.
Further, the initial partitions shown in fig. 9a may also be split based on the ground boundary. In fig. 9a, the living room and the balcony belong to the same partition G5; using the ground boundary, the living room and the balcony can be split into two partitions H2 and H3, as shown in fig. 9b. The corrected partitioning result is shown in fig. 9b, in which partitions H1, H2, H3, G6, G7 and G8 are the final target partitions, so the accuracy of the partitioning result is higher.
In addition, the initial partition may be corrected according to the ground material type, the ground boundary, or both the ground material type and the ground boundary, depending on the actual situation. For detailed implementation of specific modifications, reference may be made to the foregoing embodiments, and details are not described in this scenario embodiment.
Fig. 10 is a flowchart illustrating another partitioning method according to an exemplary embodiment of the present application. The partitioning method is implemented by an autonomous mobile device, as shown in fig. 10, and includes:
1001. Collect ground three-dimensional point cloud data in the operation area.
1002. Identify ground feature information in the operation area based on the ground three-dimensional point cloud data.
1003. Partition the operation area according to the ground feature information.
In the embodiment of the application, a three-dimensional sensor such as a structured light module, an area array laser sensor or a three-dimensional vision sensor is installed on the autonomous mobile device. The autonomous mobile device can acquire ground three-dimensional point cloud data in a working area by using a three-dimensional sensor (such as a structured light module). Further, the autonomous mobile equipment identifies ground characteristic information in the operation area based on the collected ground three-dimensional point cloud data; and partitioning the operation area according to the ground characteristic information.
In an optional embodiment, the identifying the ground feature information in the operation area based on the ground three-dimensional point cloud data comprises: and identifying the ground material category and/or the ground boundary in the operation area based on the ground three-dimensional point cloud data. Correspondingly, according to the ground characteristic information, the operation area is partitioned, and the method comprises the following steps: the work area is partitioned based on the ground material category and/or the ground boundary.
Optionally, identifying the ground material category in the working area based on the ground three-dimensional point cloud data includes: calculating ground texture features in the operation area according to the ground three-dimensional point cloud data; and identifying the ground material type in the operation area according to the ground texture characteristics in the operation area.
Further, the calculating the ground texture features in the operation area according to the ground three-dimensional point cloud data includes: dividing the ground three-dimensional point cloud data into a plurality of subsets according to the acquisition time of the ground three-dimensional point cloud data; each subset comprises a plurality of ground point data collected by a three-dimensional sensor (such as a structured light module) in the same time period; and calculating the texture characteristics of the ground areas corresponding to the subsets according to the ground point data in the subsets.
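Under the assumption that each ground point carries an acquisition timestamp, the time-based grouping can be sketched as follows; the tuple layout and window size are illustrative, not specified by the patent:

```python
def split_by_time(points, window):
    """Group (timestamp, x, y, z) ground points into subsets, one per
    acquisition window, approximating 'collected in the same time period'."""
    subsets = {}
    for t, x, y, z in points:
        subsets.setdefault(int(t // window), []).append((x, y, z))
    return [subsets[k] for k in sorted(subsets)]

points = [(0.1, 0.0, 0.0, 0.000), (0.4, 0.1, 0.0, 0.002),
          (1.2, 0.2, 0.0, 0.300), (1.7, 0.3, 0.0, 0.295)]
print([len(s) for s in split_by_time(points, window=1.0)])  # [2, 2]
```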
Further optionally, calculating texture features of the ground area corresponding to each of the plurality of subsets according to the ground point data in the plurality of subsets includes: for the first subset, calculating at least one texture feature of the ground flatness, the ravine depth, the ravine width and the ravine orientation of the ground region corresponding to the first subset according to the ground point data in the first subset; wherein the first subset is any one of a plurality of subsets.
Further optionally, calculating the ground flatness of the ground area corresponding to the first subset according to the ground point data in the first subset includes: and calculating the ground flatness of the ground area corresponding to the first subset according to the variance of the ground point data in the first subset.
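A sketch of the variance-based flatness measure: the smaller the variance of the z coordinates within a subset, the flatter the corresponding ground area. The point layout is hypothetical:

```python
def ground_flatness(subset):
    """Population variance of the z coordinates of one subset of
    (x, y, z) ground points; smaller variance means a flatter area."""
    zs = [z for _x, _y, z in subset]
    mean = sum(zs) / len(zs)
    return sum((z - mean) ** 2 for z in zs) / len(zs)

flat = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.00), (0.2, 0.0, 0.00)]
rough = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.05), (0.2, 0.0, -0.05)]
print(ground_flatness(flat) < ground_flatness(rough))  # True
```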
Further optionally, calculating the ravine depth of the ground area corresponding to the first subset according to the ground point data in the first subset includes: determining ground point data located on the same straight line in the ground area corresponding to the first subset according to the x-axis and y-axis coordinates of the ground point data in the first subset; and determining, according to the z-axis coordinates of the ground point data located on the same straight line, whether a ravine exists in the ground area corresponding to the first subset and, if so, the depth of the ravine. The three-axis coordinates of the ground point data are coordinates in a world coordinate system.
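The depth test can be sketched as follows, assuming the points on one straight line have already been selected by their x- and y-axis coordinates, and that the surrounding ground level and the dip threshold are known; both values are assumptions, not from the patent:

```python
def ravine_depth(line_points, ground_z=0.0, threshold=0.001):
    """Scan (x, y, z) points on one straight line; a dip below the
    surrounding ground level deeper than `threshold` counts as a ravine
    (e.g. a tile seam), and the deepest dip is reported."""
    dips = [ground_z - z for _x, _y, z in line_points
            if ground_z - z > threshold]
    return (bool(dips), max(dips) if dips else 0.0)

# Ten points sharing the same x coordinate; one dips 3 mm at y = 0.5.
line = [(1.0, y / 10, -0.003 if y == 5 else 0.0) for y in range(10)]
print(ravine_depth(line))  # (True, 0.003)
```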
Further optionally, calculating the ravine width of the ground area corresponding to the first subset according to the ground point data in the first subset includes any one of:
calculating the difference of x-axis coordinates of ground point data on adjacent ravines by taking the x-axis as a reference, and taking the difference as the width of the adjacent ravines in the x-axis direction;
and calculating the difference of the y-axis coordinates of the data of the ground points on adjacent ravines by taking the y-axis as a reference, and taking the difference as the width of the adjacent ravines in the y-axis direction.
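Both width calculations follow the same pattern; the x-axis case is sketched below with hypothetical ravine positions (the y-axis case is identical with y coordinates):

```python
def ravine_widths(ravine_xs):
    """x-axis widths between successive ravines: the difference of the
    x coordinates of ground points on adjacent ravines."""
    xs = sorted(ravine_xs)
    return [round(b - a, 6) for a, b in zip(xs, xs[1:])]

# Hypothetical grout lines every 0.3 m, as on a tiled floor.
print(ravine_widths([0.0, 0.3, 0.6]))  # [0.3, 0.3]
```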
Further optionally, calculating the ravine orientation of the ground area corresponding to the first subset according to the ground point data in the first subset includes: calculating, with any coordinate axis as the reference axis, the included angle between each ravine in the ground area corresponding to the first subset and the reference axis; and determining the orientation of each ravine based on the included angle and the orientation of the reference axis.
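A sketch of the orientation step with the x-axis as the reference axis, under the simplifying assumption (not stated in the patent) that each ravine is summarized by two points lying on it:

```python
import math

def ravine_angle_deg(p1, p2):
    """Included angle between the ravine through p1 and p2 and the x
    reference axis, folded to [0, 180) since a ravine is undirected."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return round(math.degrees(math.atan2(dy, dx)) % 180.0, 6)

print(ravine_angle_deg((0.0, 0.0), (1.0, 1.0)))  # 45.0 (diagonal ravine)
print(ravine_angle_deg((0.0, 0.0), (0.0, 1.0)))  # 90.0 (perpendicular)
```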
In an alternative embodiment, identifying the ground material category in the working area according to the ground texture features in the working area comprises: clustering the plurality of subsets according to the texture features of the ground areas corresponding to the plurality of subsets respectively to obtain at least one clustering result; wherein each clustering result represents a surface material category.
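The patent does not fix a clustering algorithm; the sketch below uses a simple distance-threshold rule purely for illustration, with hypothetical (flatness, ravine-width) feature pairs:

```python
def cluster_textures(features, tol=0.1):
    """Greedy threshold clustering: a feature vector within `tol` of an
    existing cluster centre joins that cluster; each resulting cluster
    stands for one ground material category."""
    centers, labels = [], []
    for f in features:
        for i, c in enumerate(centers):
            if all(abs(a - b) <= tol for a, b in zip(f, c)):
                labels.append(i)
                break
        else:
            labels.append(len(centers))
            centers.append(f)
    return labels

# (flatness, ravine width) per subset: two wood-like areas, one tile-like.
print(cluster_textures([(0.01, 0.10), (0.012, 0.11), (0.20, 0.60)]))  # [0, 0, 1]
```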
In an alternative embodiment, after identifying the ground material category in the working area, the method further includes: marking the ground material category at a corresponding position in an environment map corresponding to the operation area; wherein, the mark information corresponding to different ground material categories is different.
In an alternative embodiment, identifying a ground boundary within the work area based on the ground three-dimensional point cloud data comprises: calculating the ground terrain features in the operation area according to the ground three-dimensional point cloud data; and identifying a ground boundary in the working area according to the ground topographic characteristics and the boundary of the working area.
Further optionally, identifying a ground boundary within the work area in combination with the boundary of the work area based on the ground topography characteristics comprises: identifying a candidate area with boundary line characteristics in the operation area according to the ground terrain characteristics; and if the candidate area is connected with the boundary of the operation area, determining the candidate area as a ground boundary.
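A sketch of the connectivity check, simplifying the candidate area to a line segment and the work-area boundary to an axis-aligned rectangle; both simplifications are assumptions for illustration:

```python
def is_ground_boundary(candidate_endpoints, area_box, tol=0.05):
    """A candidate area counts as a ground boundary only if it connects to
    the work-area boundary; here: both endpoints lie on the bounding box."""
    xmin, ymin, xmax, ymax = area_box

    def on_border(p):
        x, y = p
        return (abs(x - xmin) <= tol or abs(x - xmax) <= tol
                or abs(y - ymin) <= tol or abs(y - ymax) <= tol)

    return all(on_border(p) for p in candidate_endpoints)

area = (0.0, 0.0, 5.0, 4.0)  # a 5 m x 4 m work area
print(is_ground_boundary([(0.0, 2.0), (5.0, 2.0)], area))  # True: wall to wall
print(is_ground_boundary([(1.0, 2.0), (3.0, 2.0)], area))  # False: interior
```

A candidate with boundary-line features that floats in the interior (e.g. a rug edge) is thus rejected, matching the rule above.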
Further optionally, after identifying the ground boundary within the working area, the method further includes: and marking the ground boundary at the position corresponding to the candidate area in the environment map corresponding to the operation area.
In an alternative embodiment, the partitioning the work area based on the ground material category and/or the ground boundary includes: acquiring an environment map corresponding to a working area, wherein the environment map comprises a plurality of initial partitions divided according to the boundary of the working area; and correcting the plurality of initial subareas according to the ground material type and/or the ground boundary to obtain a target subarea included in the working area.
Optionally, modifying the plurality of initial partitions according to the ground material category to obtain a target partition included in the working area, including: and merging the plurality of initial subareas based on the ground material category to obtain a target subarea included in the working area.
Further optionally, merging the multiple initial partitions based on the ground material category to obtain a target partition included in the working area, where the method includes: and for two adjacent initial partitions, if the ground material types of the two initial partitions are the same and have connectivity, combining the two initial partitions into one partition to obtain a target partition contained in the working area.
Optionally, modifying the plurality of initial partitions according to the ground boundary to obtain a target partition included in the working area, including: and splitting the plurality of initial partitions based on the ground boundary to obtain target partitions contained in the working area.
Further optionally, splitting the multiple initial partitions based on the ground boundary to obtain a target partition included in the working area, including: determining a to-be-processed partition containing a ground boundary in a plurality of initial partitions; and dividing the partition to be processed into at least two partitions by taking a ground boundary contained in the partition to be processed as a boundary to obtain a target partition contained in the working area.
Optionally, modifying the plurality of initial partitions according to the ground material category and the ground boundary to obtain a target partition included in the working area, including:
merging the plurality of initial partitions according to the ground material category to obtain merged partitions; splitting the merged subareas according to the ground boundary line to obtain target subareas contained in the operation area;
or
Splitting a plurality of initial partitions according to a ground boundary to obtain split partitions; and merging the split subareas according to the ground material category to obtain a target subarea included in the operation area.
For detailed implementation of each step in the embodiments of the present application, reference may be made to the description of the above embodiments, which is not repeated herein.
According to the partitioning method, the autonomous mobile equipment collects ground three-dimensional point cloud data, the material type and the ground boundary line of the ground are identified according to the ground three-dimensional point cloud data, the operation area is reasonably partitioned in a combining and splitting mode, the problem of mistaken partitioning of the autonomous mobile equipment is solved, the operation area is favorably and accurately partitioned, accordingly, the flexibility, the efficiency and/or the quality of the autonomous mobile equipment when the autonomous mobile equipment executes operation tasks based on partitioning are improved, and the task execution effect is improved.
Whichever partitioning method is adopted, the resulting partitions have high accuracy and precision. An environment map containing the partitioning result has various application scenarios and can be applied to fixed-point cleaning, fixed-point monitoring, fixed-point positioning and the like. Fixed-point cleaning means cleaning one or several partitions selected by the user from the partitions contained in the environment map; the user may even set the cleaning time, cleaning frequency and so on. Fixed-point monitoring means that the user may select one or several partitions contained in the environment map to monitor. Fixed-point positioning means that the user may locate the autonomous mobile device within a particular partition based on the partitions contained in the environment map. Taking fixed-point cleaning as an example, the fixed-point cleaning process based on the environment map is described below. The fixed-point cleaning method of this embodiment may be executed by a cleaning robot.
The cleaning robot responds to the fixed point cleaning triggering event and can determine a fixed point cleaning area, and the fixed point cleaning area is one of the at least one subarea; and then may move to the spot-cleaning area where the cleaning task is performed. Taking a home environment as an example, the fixed-point cleaning area can be a living room, a kitchen, a toilet and other partitions; the sweeping robot can sweep the subareas in a targeted manner, the sweeping efficiency is improved, the sweeping robot is prevented from sweeping all the subareas comprehensively, and resources are saved.
Alternatively, the fixed-point cleaning triggering event may be an event that a voice cleaning instruction issued by the user is received.
Based on the method, the sweeping robot can receive a voice sweeping instruction of a user, wherein the voice sweeping instruction comprises an identifier of a subarea needing to be swept at a fixed point; and then, determining the position of the fixed point cleaning area according to the identifier of the partition which needs fixed point cleaning and is contained in the voice cleaning instruction and the identifier of each partition in the environment map corresponding to the working area. For example, the identifier of the partition included in the voice cleaning instruction may be matched with the identifier of each partition in the environment map, and the partition and the position thereof in the matching may be set as the fixed-point cleaning area and the position thereof, respectively. Then, the cleaning robot moves to the fixed-point cleaning area, and performs a cleaning task in the fixed-point cleaning area.
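The identifier matching itself reduces to a map lookup; the sketch below uses hypothetical partition identifiers and positions:

```python
def locate_cleaning_area(instruction_id, partition_map):
    """Match the partition identifier from the voice instruction against
    the labelled partitions in the environment map; returns the position
    of the fixed-point cleaning area, or None when nothing matches."""
    return partition_map.get(instruction_id)

partition_map = {"kitchen": (2.0, 3.5), "living room": (6.0, 1.0)}
print(locate_cleaning_area("kitchen", partition_map))  # (2.0, 3.5)
```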
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 1001 to 1003 may be device a; for another example, the executing agent of steps 1001 and 1002 may be device a, and the executing agent of step 1003 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the order of the operations such as 1001, 1002, etc. is merely used for distinguishing different operations, and the order itself does not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 11 is a schematic structural diagram of a computing device according to an exemplary embodiment of the present application. As shown in fig. 11, the computing device includes: one or more memories 111 and one or more processors 112.
One or more memories 111 are used for storing computer programs and may be configured to store various other data to support operations on the computing device. Examples of such data include instructions, messages, pictures, videos, etc. for any application or method operating on the computing device.
One or more processors 112 coupled with the one or more memories 111 for executing computer programs stored in the one or more memories 111 for: acquiring ground three-dimensional point cloud data in an autonomous mobile equipment operation area; identifying ground characteristic information in the operation area based on the ground three-dimensional point cloud data; and partitioning the operation area according to the ground characteristic information.
The ground three-dimensional point cloud data can be acquired by a three-dimensional sensor such as a structured light module, an area array laser sensor or a three-dimensional vision sensor on the autonomous mobile equipment. For the structure of the structured light module and the implementation structure and the operation principle of the area array laser sensor, reference may be made to the description of the foregoing embodiments, and further description is omitted here.
In some alternative embodiments, the one or more processors 112, when identifying ground feature information within the work area based on the ground three-dimensional point cloud data, are specifically configured to: identifying the ground material category and/or the ground boundary in the operation area based on the ground three-dimensional point cloud data; correspondingly, when the operation area is partitioned according to the ground characteristic information, the method is specifically configured to: the work area is partitioned based on the ground material category and/or the ground boundary.
In some alternative embodiments, the one or more processors 112, when identifying the ground material category within the work area based on the ground three-dimensional point cloud data, are specifically configured to: calculating ground texture features in the operation area according to the ground three-dimensional point cloud data; and identifying the ground material type in the operation area according to the ground texture characteristics in the operation area.
In some alternative embodiments, the one or more processors 112, when computing the ground texture features within the work area from the ground three-dimensional point cloud data, are specifically configured to: dividing the ground three-dimensional point cloud data into a plurality of subsets according to the acquisition time of the ground three-dimensional point cloud data; each subset comprises a plurality of ground point data collected by a three-dimensional sensor (such as a structured light module) in the same time period; and calculating the texture characteristics of the ground areas corresponding to the subsets according to the ground point data in the subsets.
In some optional embodiments, the one or more processors 112, when calculating the texture features of the ground area corresponding to each of the plurality of subsets according to the ground point data in the plurality of subsets, are specifically configured to: for the first subset, calculating at least one texture feature of the ground flatness, the ravine depth, the ravine width and the ravine orientation of the ground region corresponding to the first subset according to the ground point data in the first subset; wherein the first subset is any one of a plurality of subsets.
In some optional embodiments, the one or more processors 112, when calculating the ground flatness of the ground area corresponding to the first subset based on the ground point data in the first subset, are specifically configured to: and calculating the ground flatness of the ground area corresponding to the first subset according to the variance of the ground point data in the first subset.
In some optional embodiments, the one or more processors 112, when calculating the ravine depth of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to: determine ground point data located on the same straight line in the ground area corresponding to the first subset according to the x-axis and y-axis coordinates of the ground point data in the first subset; and determine, according to the z-axis coordinates of the ground point data located on the same straight line, whether a ravine exists in the ground area corresponding to the first subset and, if so, the depth of the ravine. The three-axis coordinates of the ground point data are coordinates in a world coordinate system.
In some optional embodiments, the one or more processors 112, when calculating the ravine width of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to perform any one of: calculating, with the x-axis as a reference, the difference of the x-axis coordinates of ground point data on adjacent ravines and taking the difference as the width of the adjacent ravines in the x-axis direction; and calculating, with the y-axis as a reference, the difference of the y-axis coordinates of ground point data on adjacent ravines and taking the difference as the width of the adjacent ravines in the y-axis direction.
In some optional embodiments, the one or more processors 112, when calculating the ravine orientation of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to: calculate, with any coordinate axis as the reference axis, the included angle between each ravine in the ground area corresponding to the first subset and the reference axis; and determine the orientation of each ravine based on the included angle and the orientation of the reference axis.
In some alternative embodiments, the one or more processors 112, when identifying the ground material category within the work area based on the ground texture features within the work area, are specifically configured to: clustering the plurality of subsets according to the texture features of the ground areas corresponding to the plurality of subsets respectively to obtain at least one clustering result; wherein each clustering result represents a surface material category.
In some alternative embodiments, the one or more processors 112, after identifying the ground material category within the work area, are further configured to: marking the ground material category at a corresponding position in an environment map corresponding to the operation area; wherein, the mark information corresponding to different ground material categories is different.
In some alternative embodiments, the one or more processors 112, when identifying a ground boundary within the work area based on the ground three-dimensional point cloud data, are specifically configured to: calculating the ground terrain features in the operation area according to the ground three-dimensional point cloud data; and identifying a ground boundary in the working area according to the ground topographic characteristics and the boundary of the working area.
In some alternative embodiments, the one or more processors 112, when identifying a ground boundary within the work area based on the ground terrain characteristics in combination with the boundaries of the work area, are specifically configured to: identifying a candidate area with boundary line characteristics in the operation area according to the ground terrain characteristics; and if the candidate area is connected with the boundary of the operation area, determining the candidate area as the ground boundary.
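The connectivity check above can be sketched on a grid-map representation. The grid encoding, function name, and adjacency rule below are assumptions made for illustration; the embodiment does not prescribe a map representation:

```python
def is_ground_boundary(candidate_cells, boundary_cells):
    """A candidate area showing boundary-line characteristics is accepted as a
    ground boundary only if it is connected to (touches) the work-area boundary."""
    def adjacent(a, b):
        # 4-neighbourhood adjacency on an occupancy grid (assumed representation)
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) <= 1
    return any(adjacent(c, b) for c in candidate_cells for b in boundary_cells)

touching = is_ground_boundary([(2, 0), (2, 1)], [(1, 0), (0, 0)])   # connected
isolated = is_ground_boundary([(5, 5)], [(1, 0), (0, 0)])           # not connected
```

A candidate that never touches the work-area boundary is discarded rather than marked as a ground boundary.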
In some alternative embodiments, the one or more processors 112, after identifying the ground boundary within the work area, are further configured to: marking the ground boundary at the position corresponding to the candidate area in the environment map corresponding to the operation area.
In some alternative embodiments, the one or more processors 112, when partitioning the work area based on the ground material category and/or the ground boundary, are specifically configured to: acquiring an environment map corresponding to a working area, wherein the environment map comprises a plurality of initial partitions divided according to the boundary of the working area; and correcting the plurality of initial subareas according to the ground material type and/or the ground boundary to obtain a target subarea included in the working area.
In some optional embodiments, when the one or more processors 112 modify the plurality of initial partitions according to the ground material category to obtain a target partition included in the working area, the one or more processors are specifically configured to: and merging the plurality of initial subareas based on the ground material category to obtain a target subarea included in the working area.
In some optional embodiments, when the one or more processors 112 merge the plurality of initial partitions based on the ground material category to obtain the target partition included in the working area, the one or more processors are specifically configured to: and for two adjacent initial subareas, if the two initial subareas have the same ground material type and a connected domain, combining the two initial subareas into one subarea to obtain a target subarea contained in the working area.
In some alternative embodiments, when modifying the plurality of initial partitions according to the ground boundary to obtain the target partition included in the working area, the one or more processors 112 are specifically configured to: and splitting the plurality of initial partitions based on the ground boundary to obtain target partitions contained in the working area.
In some optional embodiments, when the one or more processors 112 split the multiple initial partitions based on the ground boundary to obtain the target partition included in the working area, the one or more processors are specifically configured to: determining a to-be-processed partition containing a ground boundary in a plurality of initial partitions; and dividing the partition to be processed into at least two partitions by taking a ground boundary contained in the partition to be processed as a boundary to obtain a target partition contained in the working area.
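The split step above can be sketched as follows. Modelling the ground boundary as a vertical line at a fixed x-coordinate is an assumption for the sketch; in practice the boundary may be any curve identified from the terrain features:

```python
def split_partition(cells, boundary_x):
    """Divide a to-be-processed partition into two partitions, using a ground
    boundary (modelled here as the vertical line x = boundary_x) as the divide."""
    left = [c for c in cells if c[0] < boundary_x]
    right = [c for c in cells if c[0] >= boundary_x]
    return left, right

left, right = split_partition([(0, 0), (1, 0), (2, 0), (3, 0)], boundary_x=2)
```

Each to-be-processed partition containing a ground boundary yields at least two target partitions in this way.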
In some optional embodiments, when the one or more processors 112 modify the plurality of initial partitions according to the ground material category and the ground boundary to obtain a target partition included in the working area, the one or more processors are specifically configured to: merging the plurality of initial partitions according to the ground material category to obtain merged partitions, and splitting the merged partitions according to the ground boundary to obtain target partitions included in the operation area; or splitting the plurality of initial partitions according to the ground boundary to obtain split partitions, and merging the split partitions according to the ground material category to obtain target partitions included in the operation area.
Further, as shown in fig. 11, the computing device further includes: a communication component 113, a display 114, a power component 115, an audio component 116, and the like. Only some components are schematically shown in fig. 11, which does not mean that the computing device includes only the components shown in fig. 11. In addition, the components within the dashed box in fig. 11 are optional components rather than required components, depending on the product form of the computing device. The computing device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IoT device, or as a server device such as a conventional server, a cloud server, or a server array. If the computing device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, or a smart phone, it may include the components within the dashed box in fig. 11; if implemented as a server device such as a conventional server, a cloud server, or a server array, it may omit the components within the dashed box in fig. 11.
The computing device provided in this embodiment may be a device provided by an autonomous mobile device provider, such as a server device that provides services for the autonomous mobile device; or may be a device provided by a third party, such as a third party server device.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: acquiring ground three-dimensional point cloud data in an operation area of the autonomous mobile equipment, wherein the ground three-dimensional point cloud data is acquired by a structured light module on the autonomous mobile equipment; identifying ground characteristic information in the operation area based on the ground three-dimensional point cloud data; partitioning the operation area according to the ground characteristic information; the structured light module comprises a line laser transmitter and a camera module.
In addition to the above-described actions, when the computer instructions are executed by one or more processors, the one or more processors may be further caused to perform other actions, which may be described in detail in the method shown in fig. 5a and will not be described again here.
Fig. 12 is a schematic structural diagram of an autonomous mobile device according to an exemplary embodiment of the present application. As shown in fig. 12, the autonomous mobile device includes: a device body 120, on which one or more memories 121, one or more processors 122, and a three-dimensional sensor 129 are arranged; the three-dimensional sensor 129 may be a structured light module 123, an area array laser sensor 127, or a three-dimensional vision sensor 128. As shown in fig. 12, the structured light module 123 includes: a camera module 1231 and line laser transmitters 1232. In fig. 12, the line laser transmitters 1232 are illustrated as being distributed on both sides of the camera module 1231, but the present application is not limited thereto. For other implementation structures of the structured light module 123, reference may be made to the description in the foregoing embodiments, which is not repeated here.
The device body 120 reflects, to some extent, the appearance of the autonomous mobile device. In this embodiment, the appearance of the autonomous mobile device is not limited; it may be, for example, circular, elliptical, triangular, or a convex polygon.
The one or more memories 121 are used for storing computer programs and may be configured to store various other data to support operations on the autonomous mobile device. Examples of such data include instructions for any application or method operating on the autonomous mobile device, map data of the environment/scene in which the autonomous mobile device is located, operating modes, operating parameters, and so forth.
The one or more processors 122, which may be regarded as the control system of the autonomous mobile device, may be configured to execute the computer instructions stored in the one or more memories 121 to: collect ground three-dimensional point cloud data in the operation area by using the three-dimensional sensor 129; identify ground characteristic information in the operation area based on the ground three-dimensional point cloud data; and partition the operation area according to the ground characteristic information.
In some optional embodiments, the one or more processors 122, when identifying ground feature information within the work area based on the ground three-dimensional point cloud data, are specifically configured to: identifying the ground material category and/or the ground boundary in the operation area based on the ground three-dimensional point cloud data; correspondingly, when the operation area is partitioned according to the ground characteristic information, the method is specifically configured to: the work area is partitioned based on the ground material category and/or the ground boundary.
In some alternative embodiments, the one or more processors 122, when identifying the ground material category within the work area based on the ground three-dimensional point cloud data, are specifically configured to: calculating ground texture features in the operation area according to the ground three-dimensional point cloud data; and identifying the ground material type in the operation area according to the ground texture characteristics in the operation area.
In some alternative embodiments, the one or more processors 122, when computing the ground texture features within the work area from the ground three-dimensional point cloud data, are specifically configured to: dividing the ground three-dimensional point cloud data into a plurality of subsets according to the acquisition time of the ground three-dimensional point cloud data; each subset comprises a plurality of ground point data collected by the three-dimensional sensor 129 in the same time period; and calculating the texture characteristics of the ground areas corresponding to the subsets according to the ground point data in the subsets.
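The time-based division above can be sketched as follows; the function name, the tuple layout `(timestamp, (x, y, z))`, and the fixed window length are assumptions for illustration:

```python
def split_by_time(points, window):
    """Group ground points into subsets by acquisition timestamp: points whose
    timestamps fall within the same time window end up in the same subset."""
    subsets = {}
    for t, xyz in points:
        subsets.setdefault(int(t // window), []).append(xyz)
    return [subsets[k] for k in sorted(subsets)]

scans = [(0.10, (0.00, 0.0, 0.0)),
         (0.15, (0.01, 0.0, 0.0)),
         (1.20, (0.50, 0.0, 0.0))]
subsets = split_by_time(scans, window=1.0)   # two subsets by time window
```

Each resulting subset then corresponds to one ground area whose texture features are computed independently.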
In some optional embodiments, the one or more processors 122, when calculating the texture features of the ground areas corresponding to each of the plurality of subsets according to the ground point data in the plurality of subsets, are specifically configured to: for a first subset, calculating at least one of the following texture features of the ground area corresponding to the first subset according to the ground point data in the first subset: ground flatness, ravine depth, ravine width, and ravine orientation; wherein the first subset is any one of the plurality of subsets.
In some optional embodiments, the one or more processors 122, when calculating the ground flatness of the ground area corresponding to the first subset based on the ground point data in the first subset, are specifically configured to: and calculating the ground flatness of the ground area corresponding to the first subset according to the variance of the ground point data in the first subset.
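The variance-based flatness measure above can be sketched as follows; taking the variance over the z-coordinates (heights) of the ground points is an assumption consistent with the world-coordinate convention used elsewhere in the embodiments:

```python
def ground_flatness(ground_points):
    """Flatness of a ground area as the variance of the z-coordinates of its
    ground points: the smaller the variance, the flatter the area."""
    zs = [p[2] for p in ground_points]
    mean = sum(zs) / len(zs)
    return sum((z - mean) ** 2 for z in zs) / len(zs)

flat = ground_flatness([(0, 0, 0.00), (1, 0, 0.00), (2, 0, 0.00)])   # perfectly flat
rough = ground_flatness([(0, 0, 0.00), (1, 0, 0.02), (2, 0, -0.02)]) # textured
```

A hard floor would typically yield a near-zero variance, while carpet or tiled seams yield a larger one.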
In some optional embodiments, the one or more processors 122, when calculating the ravine depth of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to: determining ground point data located on the same straight line in the ground area corresponding to the first subset according to the x-axis and y-axis coordinates of the ground point data in the first subset; and determining, according to the z-axis coordinates of the ground point data located on the same straight line, whether a ravine exists in the ground area corresponding to the first subset and, if so, the depth of the ravine; wherein the three-axis coordinates of the ground point data are coordinates in a world coordinate system.
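The depth test on one such line of collinear points can be sketched as follows. Taking the maximum z as the local surface level and the 5 mm existence threshold are assumptions for the sketch, not values given by the embodiment:

```python
def ravine_depth(line_points, threshold=0.005):
    """For ground points (x, y, z) on the same straight line, inspect the
    z-coordinates: if some points dip below the local surface level by more
    than `threshold`, a ravine exists, and the dip is taken as its depth."""
    zs = [p[2] for p in line_points]
    surface = max(zs)            # assumed local surface level
    depth = surface - min(zs)
    return depth > threshold, depth

# A 2 cm dip in the middle of an otherwise level scan line:
exists, depth = ravine_depth([(0.00, 0, 0.0), (0.01, 0, -0.02), (0.02, 0, 0.0)])
```

Lines whose dip stays within the threshold are treated as ravine-free.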
In some alternative embodiments, the one or more processors 122, when calculating the ravine width of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to perform either of the following: calculating the difference of the x-axis coordinates of ground point data on adjacent ravines by taking the x-axis as a reference, and taking the difference as the width of the adjacent ravines in the x-axis direction; or calculating the difference of the y-axis coordinates of ground point data on adjacent ravines by taking the y-axis as a reference, and taking the difference as the width of the adjacent ravines in the y-axis direction.
In some optional embodiments, the one or more processors 122, when calculating the ravine orientation of the ground area corresponding to the first subset according to the ground point data in the first subset, are specifically configured to: calculating an included angle between each ravine in the ground area corresponding to the first subset and a reference axis, by taking any coordinate axis as the reference axis; and determining the orientation of each ravine according to the included angle and the orientation of the reference axis.
In some alternative embodiments, the one or more processors 122, when identifying the ground material category within the work area based on the ground texture features within the work area, are specifically configured to: clustering the plurality of subsets according to the texture features of the ground areas corresponding to the plurality of subsets respectively to obtain at least one clustering result; wherein each clustering result represents one ground material category.
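The embodiments do not fix a particular clustering algorithm; a simple greedy distance-threshold scheme over the per-subset feature vectors is one possible sketch. The feature pairs, tolerance, and function name below are assumptions:

```python
def cluster_subsets(features, tol=0.2):
    """Greedy clustering of per-subset texture feature vectors: a subset joins
    an existing cluster when its features lie within `tol` (Euclidean distance)
    of that cluster's first member; each resulting cluster then stands for one
    ground material category."""
    centres, labels = [], []
    for f in features:
        for i, c in enumerate(centres):
            if sum((a - b) ** 2 for a, b in zip(f, c)) ** 0.5 <= tol:
                labels.append(i)
                break
        else:
            centres.append(f)
            labels.append(len(centres) - 1)
    return labels

# (flatness, ravine depth) pairs: two tile-like subsets and one carpet-like subset
labels = cluster_subsets([(0.00, 0.00), (0.05, 0.00), (1.00, 1.00)])
```

Each distinct label in the result corresponds to one clustering result, i.e. one ground material category.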
In some alternative embodiments, the one or more processors 122, after identifying the ground material category within the work area, are further configured to: marking the ground material category at the corresponding position in the environment map corresponding to the operation area; wherein the marking information corresponding to different ground material categories is different.
In some alternative embodiments, the one or more processors 122, when identifying a ground boundary within the work area based on the ground three-dimensional point cloud data, are specifically configured to: calculating the ground terrain features in the operation area according to the ground three-dimensional point cloud data; and identifying a ground boundary in the working area according to the ground topographic characteristics and the boundary of the working area.
In some alternative embodiments, the one or more processors 122, when identifying a ground boundary within the work area based on the ground topography characteristics in combination with the boundaries of the work area, are specifically configured to: identifying a candidate area with boundary line characteristics in the operation area according to the ground terrain characteristics; and if the candidate area is connected with the boundary of the operation area, determining the candidate area as the ground boundary.
In some alternative embodiments, the one or more processors 122, after identifying the ground boundary within the work area, are further configured to: marking the ground boundary at the position corresponding to the candidate area in the environment map corresponding to the operation area.
In some alternative embodiments, the one or more processors 122, when partitioning the work area based on the ground material category and/or the ground boundary, are specifically configured to: acquiring an environment map corresponding to a working area, wherein the environment map comprises a plurality of initial partitions divided according to the boundary of the working area; and correcting the plurality of initial subareas according to the ground material type and/or the ground boundary to obtain a target subarea included in the working area.
In some optional embodiments, when the one or more processors 122 modify the plurality of initial partitions according to the ground material category to obtain a target partition included in the working area, the one or more processors are specifically configured to: and merging the plurality of initial subareas based on the ground material category to obtain a target subarea included in the working area.
In some optional embodiments, when the one or more processors 122 merge the plurality of initial partitions based on the ground material category to obtain the target partition included in the working area, the one or more processors are specifically configured to: and for two adjacent initial subareas, if the two initial subareas have the same ground material type and a connected domain, combining the two initial subareas into one subarea to obtain a target subarea contained in the working area.
In some optional embodiments, when the one or more processors 122 modify the plurality of initial partitions according to the ground boundary to obtain the target partition included in the working area, the one or more processors are specifically configured to: and splitting the plurality of initial partitions based on the ground boundary to obtain target partitions contained in the working area.
In some optional embodiments, when the one or more processors 122 split the multiple initial partitions based on the ground boundary to obtain the target partition included in the working area, the one or more processors are specifically configured to: determining a to-be-processed partition containing a ground boundary in a plurality of initial partitions; and dividing the partition to be processed into at least two partitions by taking a ground boundary contained in the partition to be processed as a boundary to obtain a target partition contained in the working area.
In some optional embodiments, when the one or more processors 122 modify the plurality of initial partitions according to the ground material category and the ground boundary to obtain a target partition included in the working area, the one or more processors are specifically configured to: merging the plurality of initial partitions according to the ground material category to obtain merged partitions, and splitting the merged partitions according to the ground boundary to obtain target partitions included in the operation area; or splitting the plurality of initial partitions according to the ground boundary to obtain split partitions, and merging the split partitions according to the ground material category to obtain target partitions included in the operation area.
Further, the autonomous mobile device of this embodiment may include, in addition to the various components mentioned above, some basic components such as a communication component 124, a power component 125, a driving component 126, and the like. The driving component 126 may include drive wheels, a drive motor, universal wheels, and the like. Further optionally, the autonomous mobile device may also include a display and an audio component.
Alternatively, the autonomous mobile device of this embodiment may be a robot, a cleaner, an unmanned vehicle, or the like. In an optional embodiment, the autonomous mobile device of this embodiment may be implemented as a sweeping robot; when implemented as a sweeping robot, it may further include: a striker plate, a vision sensor or LDS sensor, a look-down sensor, a cleaning assembly, and the like. The vision sensor may be a camera or the like. The cleaning assembly may include a cleaning motor, a cleaning brush, a dusting brush, a dust suction fan, and the like. These basic components, and their configurations, differ across autonomous mobile devices; the embodiments of the present application are only some examples.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: collecting ground three-dimensional point cloud data in an operation area by using a structured light module; identifying ground characteristic information in the operation area based on the ground three-dimensional point cloud data; and partitioning the operation area according to the ground characteristic information.
In addition to the above-described actions, when the computer instructions are executed by one or more processors, the one or more processors may be caused to perform other actions, which may be described in detail in the method shown in fig. 10 and will not be described again here.
The memories in the above figures may be implemented as any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The communication component in the above embodiments is configured to facilitate wired or wireless communication between the device where the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE, or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display in the above embodiments includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply components in the embodiments described above provide power to the various components of the device in which the power supply components are located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component in the above embodiments may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (28)

1. A method of partitioning, comprising:
acquiring ground three-dimensional point cloud data in an autonomous mobile equipment operation area;
identifying ground characteristic information in the operation area based on the ground three-dimensional point cloud data;
and partitioning the operation area according to the ground characteristic information.
2. The method of claim 1, wherein identifying ground feature information within the work area based on the ground three-dimensional point cloud data comprises:
identifying the ground material category and/or the ground boundary in the operation area based on the ground three-dimensional point cloud data;
correspondingly, according to the ground characteristic information, partitioning the operation area includes: partitioning the work area based on the ground material category and/or a ground boundary.
3. The method of claim 2, wherein identifying the ground material category within the work area based on the ground three-dimensional point cloud data comprises:
calculating ground texture features in the operation area according to the ground three-dimensional point cloud data;
and identifying the ground material category in the operation area according to the ground texture features in the operation area.
4. The method of claim 3, wherein computing a ground texture feature within the work area from the ground three-dimensional point cloud data comprises:
dividing the ground three-dimensional point cloud data into a plurality of subsets according to the acquisition time of the ground three-dimensional point cloud data; each subset comprises a plurality of ground point data collected in the same time period;
and calculating texture features of the ground areas corresponding to the subsets according to the ground point data in the subsets.
5. The method of claim 4 wherein computing texture features for the ground regions corresponding to each of the plurality of subsets from the ground point data in the plurality of subsets comprises:
for a first subset, calculating at least one of the following texture features of the ground area corresponding to the first subset according to the ground point data in the first subset: ground flatness, ravine depth, ravine width, and ravine orientation; wherein the first subset is any one of the plurality of subsets.
6. The method of claim 5 wherein calculating the ground flatness of the ground area corresponding to the first subset from the ground point data in the first subset comprises:
and calculating the ground flatness of the ground area corresponding to the first subset according to the variance of the ground point data in the first subset.
7. The method of claim 5, wherein calculating a ravine depth of the ground area corresponding to the first subset based on the ground point data in the first subset comprises:
determining ground point data which are positioned on the same straight line in the ground area corresponding to the first subset according to the x-axis coordinate and the y-axis coordinate of the ground point data in the first subset;
determining whether ravines exist in the ground area corresponding to the first subset and the depth of the ravines when the ravines exist according to the z-axis coordinate of the ground point data located on the same straight line;
and the three-axis coordinates of the ground point data are coordinates in a world coordinate system.
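One way to sketch claim 7's collinearity-then-depth test: treat points sharing (approximately) the same x-coordinate as lying on a line parallel to the y-axis, and report the z-drop along each such line when it exceeds a threshold. The tolerances `line_tol` and `depth_tol` are illustrative assumptions, not values from the patent:

```python
def gully_depth(subset, line_tol=0.01, depth_tol=0.005):
    """Estimate gully depths along lines parallel to the y-axis.

    Points whose x-coordinates agree to within `line_tol` are treated as
    collinear; the drop from the highest to the lowest z on a line, when
    it exceeds `depth_tol`, is reported as a gully depth (in metres,
    world coordinates).
    """
    lines = {}
    for x, y, z in subset:
        lines.setdefault(round(x / line_tol), []).append(z)
    depths = []
    for zs in lines.values():
        drop = max(zs) - min(zs)
        if drop > depth_tol:
            depths.append(drop)
    return depths
```

An empty result means no gully was detected in the patch.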
8. The method of claim 7, wherein calculating the gully width of the ground area corresponding to the first subset from the ground point data in the first subset comprises any one of:
taking the x-axis as a reference, calculating the difference between the x-axis coordinates of ground point data on adjacent gullies as the width between the adjacent gullies in the x-axis direction; and
taking the y-axis as a reference, calculating the difference between the y-axis coordinates of ground point data on adjacent gullies as the width between the adjacent gullies in the y-axis direction.
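The width computation of claim 8 reduces to coordinate differences between adjacent gullies. A sketch, assuming one representative axis coordinate per detected gully (the same function serves the x-axis and y-axis cases):

```python
def gully_widths(gully_coords):
    """Width between adjacent gullies along one axis.

    `gully_coords` holds one representative coordinate (x or y) per
    detected gully; the width between adjacent gullies is the difference
    of consecutive sorted coordinates.
    """
    cs = sorted(gully_coords)
    return [b - a for a, b in zip(cs, cs[1:])]
```

With fewer than two gullies there is no adjacent pair, so the result is empty.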
9. The method of claim 7, wherein calculating the gully orientation of the ground area corresponding to the first subset from the ground point data in the first subset comprises:
taking any coordinate axis as a reference axis, calculating the included angle between each gully in the ground area corresponding to the first subset and the reference axis; and
determining the orientation of each gully from the included angle and the orientation of the reference axis.
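A sketch of claim 9's included-angle computation, taking the x-axis as the reference axis and representing a gully by two points on it (both choices are assumptions permitted by the claim's "any coordinate axis"):

```python
import math

def gully_orientation(p1, p2):
    """Angle in degrees between a gully line and the x-axis.

    `p1` and `p2` are two (x, y) points on the gully. The result is
    folded into [0, 180) because a gully's orientation has no direction.
    """
    angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return angle % 180.0
```

A gully running parallel to the y-axis, for example, reports an orientation of 90 degrees.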
10. The method of claim 4, wherein identifying the ground material category within the work area based on the ground texture features within the work area comprises:
clustering the plurality of subsets according to the texture features of their corresponding ground areas to obtain at least one clustering result, wherein each clustering result represents one ground material category.
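Claim 10 does not fix a particular clustering algorithm; as one illustrative choice, a greedy single-pass clustering of per-subset feature vectors might look like this (the componentwise tolerance `tol` is an assumption):

```python
def cluster_by_texture(features, tol=0.1):
    """Greedy clustering of per-subset texture feature vectors.

    Subsets whose feature vectors differ by at most `tol` in every
    component join the same cluster; each resulting cluster stands for
    one ground material category.
    """
    clusters = []  # each entry: (representative feature vector, member indices)
    for i, f in enumerate(features):
        for rep, members in clusters:
            if all(abs(a - b) <= tol for a, b in zip(rep, f)):
                members.append(i)
                break
        else:
            clusters.append((f, [i]))
    return [members for _, members in clusters]
```

Two subsets with near-identical flatness/gully features fall into one cluster, i.e. one material category, while a clearly different subset opens a new one.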
11. The method of any one of claims 2-10, further comprising, after identifying the ground material category within the work area:
marking the ground material category at the corresponding position in an environment map of the work area, wherein different ground material categories carry different marking information.
12. The method of any one of claims 2-10, wherein identifying a ground boundary within the work area based on the ground three-dimensional point cloud data comprises:
calculating ground terrain features within the work area from the ground three-dimensional point cloud data; and
identifying the ground boundary within the work area from the ground terrain features combined with the boundary of the work area.
13. The method of claim 12, wherein identifying the ground boundary within the work area from the ground terrain features combined with the boundary of the work area comprises:
identifying, from the ground terrain features, a candidate area having boundary-line features within the work area; and
determining the candidate area as the ground boundary if the candidate area is connected to the boundary of the work area.
14. The method of claim 13, further comprising, after identifying the ground boundary within the work area:
marking the ground boundary at the position corresponding to the candidate area in the environment map of the work area.
15. The method of any one of claims 2-10, wherein partitioning the work area based on the ground material category and/or the ground boundary comprises:
acquiring an environment map of the work area, the environment map comprising a plurality of initial partitions divided according to the boundary of the work area; and
correcting the plurality of initial partitions according to the ground material category and/or the ground boundary to obtain the target partitions included in the work area.
16. The method of claim 15, wherein correcting the plurality of initial partitions according to the ground material category to obtain the target partitions included in the work area comprises:
merging the plurality of initial partitions based on the ground material category to obtain the target partitions included in the work area.
17. The method of claim 16, wherein merging the plurality of initial partitions based on the ground material category to obtain the target partitions included in the work area comprises:
for two adjacent initial partitions, merging the two initial partitions into one partition if they have the same ground material category and are connected, to obtain the target partitions included in the work area.
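The same-material-and-connectivity merge of claim 17 can be sketched with a union-find structure; the input representation (a material label per partition plus an adjacency list of touching partitions) is an assumption for the example:

```python
def merge_partitions(materials, adjacency):
    """Merge initial partitions that are adjacent and share a material.

    `materials[i]` is the material category of partition i; `adjacency`
    lists pairs of partitions that touch (i.e. have connectivity).
    Returns the groups of initial-partition indices forming each target
    partition.
    """
    parent = list(range(len(materials)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in adjacency:
        if materials[a] == materials[b]:
            parent[find(a)] = find(b)  # union the two partitions

    groups = {}
    for i in range(len(materials)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

Two touching tile partitions merge into one target partition, while an adjacent carpet partition stays separate.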
18. The method of claim 15, wherein correcting the plurality of initial partitions according to the ground boundary to obtain the target partitions included in the work area comprises:
splitting the plurality of initial partitions based on the ground boundary to obtain the target partitions included in the work area.
19. The method of claim 18, wherein splitting the plurality of initial partitions based on the ground boundary to obtain the target partitions included in the work area comprises:
determining, among the plurality of initial partitions, a to-be-processed partition that contains the ground boundary; and
dividing the to-be-processed partition into at least two partitions using the ground boundary it contains as the dividing line, to obtain the target partitions included in the work area.
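The split step of claim 19 can be sketched by classifying a partition's map cells against the boundary. Representing the partition as `(x, y)` grid cells and the ground boundary as a line `a*x + b*y + c = 0` is an assumption for illustration:

```python
def split_partition(cells, boundary):
    """Split one to-be-processed partition along a ground boundary line.

    `cells` are (x, y) grid cells of the partition; `boundary` is the
    line (a, b, c) meaning a*x + b*y + c = 0. Cells on each side of the
    line form one of the two resulting partitions.
    """
    a, b, c = boundary
    left = [p for p in cells if a * p[0] + b * p[1] + c < 0]
    right = [p for p in cells if a * p[0] + b * p[1] + c >= 0]
    return left, right
```

A vertical boundary at x = 1, encoded as `(1, 0, -1)`, separates cells with x < 1 from the rest.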
20. The method of claim 15, wherein correcting the plurality of initial partitions according to the ground material category and the ground boundary to obtain the target partitions included in the work area comprises:
merging the plurality of initial partitions according to the ground material category to obtain merged partitions, and splitting the merged partitions according to the ground boundary to obtain the target partitions included in the work area; or
splitting the plurality of initial partitions according to the ground boundary to obtain split partitions, and merging the split partitions according to the ground material category to obtain the target partitions included in the work area.
21. The method of any one of claims 1-10, wherein acquiring ground three-dimensional point cloud data within the work area of an autonomous mobile device comprises:
receiving the ground three-dimensional point cloud data reported by the autonomous mobile device, the data being collected by a structured light module on the autonomous mobile device, wherein the structured light module comprises a line laser emitter and a camera module.
22. A partitioning method, applicable to an autonomous mobile device, the method comprising:
collecting ground three-dimensional point cloud data within a work area;
identifying ground feature information within the work area based on the ground three-dimensional point cloud data; and
partitioning the work area according to the ground feature information.
23. The method of claim 22, wherein a structured light module comprising a line laser emitter and a camera module is mounted on the front side of the autonomous mobile device; and
wherein collecting ground three-dimensional point cloud data within the work area comprises: collecting the ground three-dimensional point cloud data within the work area with the structured light module.
24. A computing device, comprising: one or more memories and one or more processors;
the one or more memories being configured to store a computer program, and the one or more processors, coupled to the one or more memories, being configured to execute the computer program to:
acquire ground three-dimensional point cloud data within the work area of an autonomous mobile device;
identify ground feature information within the work area based on the ground three-dimensional point cloud data; and
partition the work area according to the ground feature information.
25. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to implement the steps of the method of any one of claims 1-21.
26. An autonomous mobile device, comprising: a device body provided with one or more processors and one or more memories storing computer programs;
the one or more processors, coupled to the one or more memories, being configured to execute the computer programs to:
collect ground three-dimensional point cloud data within a work area;
identify ground feature information within the work area based on the ground three-dimensional point cloud data; and
partition the work area according to the ground feature information.
27. The device of claim 26, wherein the device body further comprises a structured light module comprising a line laser emitter and a camera module, and the one or more processors are specifically configured to collect the ground three-dimensional point cloud data within the work area with the structured light module.
28. The device of claim 27, wherein there are at least two line laser emitters, distributed on opposite sides of the camera module.
CN201911398734.9A 2019-12-30 2019-12-30 Partitioning method, partitioning equipment and storage medium Active CN111123278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911398734.9A CN111123278B (en) 2019-12-30 2019-12-30 Partitioning method, partitioning equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111123278A true CN111123278A (en) 2020-05-08
CN111123278B CN111123278B (en) 2022-07-12

Family

ID=70505535


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111528739A (en) * 2020-05-09 2020-08-14 小狗电器互联网科技(北京)股份有限公司 Sweeping mode switching method and system, electronic equipment, storage medium and sweeper
CN111973075A (en) * 2020-08-21 2020-11-24 苏州三六零机器人科技有限公司 Floor sweeping method and device based on house type graph, sweeper and computer medium
WO2021135392A1 (en) * 2019-12-30 2021-07-08 科沃斯机器人股份有限公司 Structured light module and autonomous moving apparatus
CN113341752A (en) * 2021-06-25 2021-09-03 杭州萤石软件有限公司 Intelligent door lock and cleaning robot linkage method and intelligent home system
CN113475978A (en) * 2021-06-23 2021-10-08 深圳乐动机器人有限公司 Robot recognition control method and device, robot and storage medium
CN116300974A (en) * 2023-05-18 2023-06-23 科沃斯家用机器人有限公司 Operation planning, partitioning, operation method, autonomous mobile device and cleaning robot
WO2023155732A1 (en) * 2022-02-21 2023-08-24 追觅创新科技(苏州)有限公司 Area information processing method and apparatus, storage medium, and electronic device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950433A (en) * 2010-08-31 2011-01-19 东南大学 Building method of transformer substation three-dimensional model by using laser three-dimensional scanning technique
CN103645480A (en) * 2013-12-04 2014-03-19 北京理工大学 Geographic and geomorphic characteristic construction method based on laser radar and image data fusion
CN104359430A (en) * 2014-10-14 2015-02-18 华南农业大学 Laser-ranging-based dynamic paddy field flatness detection device and method thereof
WO2018119902A1 (en) * 2016-12-29 2018-07-05 华为技术有限公司 Method and apparatus for detecting ground environment
CN108960060A (en) * 2018-06-01 2018-12-07 东南大学 A kind of automatic driving vehicle pavement texture identifying system and method
CN109074490A (en) * 2018-07-06 2018-12-21 深圳前海达闼云端智能科技有限公司 Path detection method, related device and computer readable storage medium
CN109190573A (en) * 2018-09-12 2019-01-11 百度在线网络技术(北京)有限公司 A kind of ground detection method, apparatus, electronic equipment, vehicle and storage medium
CN109934228A (en) * 2019-03-18 2019-06-25 上海盎维信息技术有限公司 3D point cloud processing method and processing device based on artificial intelligence
CN110084116A (en) * 2019-03-22 2019-08-02 深圳市速腾聚创科技有限公司 Pavement detection method, apparatus, computer equipment and storage medium
CN110378246A (en) * 2019-06-26 2019-10-25 深圳前海达闼云端智能科技有限公司 Ground detection method, apparatus, computer readable storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
CN111123278B (en) Partitioning method, partitioning equipment and storage medium
CN109998421B (en) Mobile cleaning robot assembly and durable mapping
CN111093019A (en) Terrain recognition, traveling and map construction method, equipment and storage medium
US11407116B2 (en) Robot and operation method therefor
JP6772129B2 (en) Systems and methods for the use of optical mileage sensors in mobile robots
CN112739244B (en) Mobile robot cleaning system
CN109947109B (en) Robot working area map construction method and device, robot and medium
CN111142526A (en) Obstacle crossing and operation method, equipment and storage medium
US11400595B2 (en) Robotic platform with area cleaning mode
CN106780735B (en) Semantic map construction method and device and robot
US11647885B2 (en) Robot vacuum cleaner and cleaning route planning method thereof
EP3104194B1 (en) Robot positioning system
KR100735565B1 (en) Method for detecting an object using structured light and robot using the same
US20180361581A1 (en) Robotic platform with following mode
TWI766410B (en) move robot
CN102960035A (en) Extended fingerprint generation
JP6619967B2 (en) Autonomous mobile device, autonomous mobile system, and environmental map evaluation method
CN110960138A (en) Structured light module and autonomous mobile device
CN111083332B (en) Structured light module, autonomous mobile device and light source distinguishing method
CN112828879B (en) Task management method and device, intelligent robot and medium
CN110974083A (en) Structured light module and autonomous mobile device
US20230123512A1 (en) Robotic cleaning device with dynamic area coverage
KR20200015348A (en) Mobile Robot Setting Boundary of Attribute Block
CN112204345A (en) Indoor positioning method of mobile equipment, mobile equipment and control system
CN112741562A (en) Sweeper control method, sweeper control device, sweeper control equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant