CN116711996A - Operation method, self-mobile device, and storage medium - Google Patents
- Publication number: CN116711996A
- Application number: CN202310706115.1A
- Authority: CN (China)
- Prior art keywords: target object, self, target, structured light, machine behavior
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL (A—HUMAN NECESSITIES; A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL)
- A47L11/24—Floor-sweeping machines, motor-driven
- A47L11/4002—Installations of electric equipment
- A47L11/4008—Arrangements of switches, indicators or the like
- A47L11/4011—Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
- A47L11/4061—Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
- A47L2201/04—Automatic control of the travelling movement; Automatic obstacle detection (A47L2201/00—Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation)
- A47L2201/06—Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS] (Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS)
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
- Numerical Control (AREA)
Abstract
Embodiments of the present application provide an operation method, a self-moving device, and a storage medium. In these embodiments, the richer environmental data available from the structured light module on the self-moving device is fully exploited to recognize the categories of different objects in the working environment, and different machine behavior modes are then adopted when performing the operation task on objects of different categories. The operation task can therefore be performed in a more targeted and refined manner, the operation time can be shortened, the operation capability improved, and the user experience enhanced. For a sweeping robot in particular, adopting different machine behavior modes when performing the cleaning task on different categories of objects yields more targeted, refined and purposeful cleaning, shortens the cleaning time, improves the cleaning capability, and improves the user experience.
Description
Statement
The original application has the following application number: 202110944997.6; the original filing date is 2021.08.17; and the original application is entitled: Operation method, self-moving device, and storage medium.
Technical Field
The present application relates to the field of artificial intelligence, and in particular to an operation method, a self-moving device, and a storage medium.
Background
As living standards continue to rise, more and more smart appliances are entering people's homes, and many households have begun to use intelligent sweeping robots to clean the home environment. With the development of AI technology and the steady improvement of sensor performance, the sweeping robot has gradually evolved from the early random-sweeping (Random) era to the intelligent-sweeping (Smart Move) era in which cleaning can be planned, and its cleaning modes continue to advance.
Although the cleaning modes of sweeping robots keep improving, a sweeping robot can still only use a relatively simple planning strategy to traverse and sweep the entire home environment. It therefore cannot meet the finer cleaning requirements of a home environment, the cleaning time is long, and the user experience is poor.
Disclosure of Invention
Aspects of the present application provide an operation method, a self-moving device, and a storage medium that satisfy finer-grained operation requirements.
An embodiment of the present application provides an operation method applicable to a self-moving device equipped with a structured light module. The method comprises the following steps:
acquiring structured light data and image data in a front working area by using, respectively, the structured light assembly and the vision sensor of the structured light module; identifying, based on the image data, a target object category present in the front working area, and selecting a target machine behavior mode adapted to the target object category; and, with the aid of the structured light data, controlling the self-moving device to perform the operation task on the target object present in the front working area according to the target machine behavior mode.
An embodiment of the present application further provides a self-moving device, comprising a device body on which one or more memories, one or more processors, and a structured light module are provided; the structured light module comprises a structured light assembly and a vision sensor;
the one or more memories are configured to store a computer program; and the one or more processors are configured to execute the computer program to:
acquire structured light data and image data in a front working area by using, respectively, the structured light assembly and the vision sensor of the structured light module; identify, based on the image data, a target object category present in the front working area, and select a target machine behavior mode adapted to the target object category; and, with the aid of the structured light data, control the self-moving device to perform the operation task on the target object present in the front working area according to the target machine behavior mode.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the operation method embodiments for the self-moving device provided herein.
In the embodiments of the present application, the richer environmental data available from the structured light module on the self-moving device is fully exploited to recognize the categories of different objects in the working environment, and different machine behavior modes are then adopted when performing the operation task on objects of different categories, so that the operation task is performed in a more targeted and refined manner, the operation time can be shortened, the operation capability improved, and the user experience enhanced. For a sweeping robot in particular, adopting different machine behavior modes when performing the cleaning task on different categories of objects yields more targeted, refined and purposeful cleaning, shortens the cleaning time, improves the cleaning capability, and improves the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic diagram of a scenario in which a self-moving device according to an exemplary embodiment of the present application performs an operation using a structured light module;
fig. 2 is a schematic structural diagram of a structured light module according to an exemplary embodiment of the present application;
FIG. 3 is a schematic structural diagram of another structured light module according to an exemplary embodiment of the present application;
FIG. 4 is a flow chart of an operation method according to an exemplary embodiment of the present application;
fig. 5 is a schematic view of a scenario during operation of a sweeping robot according to an exemplary embodiment of the present application;
fig. 6 is a floor plan of a home environment provided in an exemplary embodiment of the present application;
fig. 7 is a schematic structural diagram of a self-moving device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Existing sweeping robots cannot meet users' demands for careful cleaning of the home environment, nor can they perform the cleaning task in a way that is targeted to a complex and finely structured home environment and adapted to local conditions; as a result, the cleaning time is long, the efficiency is low, and the user experience is poor. To address this technical problem, the embodiments of the present application make full use of the richer environmental data obtained with the structured light module on the self-moving device to recognize the categories of different objects in the working environment, and then adopt different machine behavior modes when performing the operation task on objects of different categories, so that the operation task is performed in a more targeted, refined and purposeful way, the operation time can be shortened, the operation capability improved, and the user experience enhanced. For a sweeping robot in particular, adopting different machine behavior modes when performing the cleaning task on different categories of objects yields more targeted, refined and purposeful cleaning, shortens the cleaning time, improves the cleaning capability, and improves the user experience.
As described herein, the various methods provided by the embodiments of the present application may be implemented by a self-moving device. In the embodiments of the present application, a self-moving device may be any mechanical device capable of moving autonomously in the environment in which it is located, for example a robot, a purifier, or an unmanned vehicle. The robot may be a sweeping robot, a companion robot, a guide robot, or the like. This explanation of "self-moving device" applies to all embodiments of the present application and is not repeated below.
Before the methods provided by the embodiments of the present application are explained in detail, the structured light module that may be adopted by the self-moving device is described. In the embodiments of the present application, a structured light module is installed on the self-moving device. The structured light module used in the embodiments of the present application generally refers to any structured light module that includes a structured light assembly and a vision sensor.
The line laser emitter 102 emits visible or invisible line laser outward, and the laser camera 101 collects laser images of the environment probed by the line laser. Specifically, when the line laser emitted by the line laser emitter 102 strikes an object in the environment, a laser stripe is formed on the object, and the laser camera 101 collects, within its field of view, a laser image containing the laser stripe. Using the position of the laser stripe in the laser image, the triangulation ranging principle, and the coordinate transformations among the coordinate system of the laser camera 101, the device coordinate system of the self-moving device and the world coordinate system, information such as the three-dimensional point cloud data, contour, height, width, depth and length of the objects within the field of view of the laser camera 101 can be derived from the laser image.
In fig. 1, the self-moving device moves forward on a working surface (e.g., a floor, a table top, or a glass surface) and emits line laser outward through the line laser emitter 102. If the line laser strikes an object in the front working area, a laser stripe is formed on the object, and the laser camera 101 collects a laser image containing the laser stripe. From the position of the laser stripe in the laser image, the triangulation ranging principle, and the coordinate transformations among the coordinate system of the laser camera 101, the device coordinate system of the self-moving device and the world coordinate system, the following can be calculated: the height h of each position point on the object corresponding to the laser stripe (i.e., the distance between the position point and the working surface), the depth s of each position point (i.e., the distance between the position point and the self-moving device), the three-dimensional point cloud data of each position point, the width b of the object (the width direction being perpendicular to the advancing direction), and the length a of the object (the length direction being parallel to the advancing direction). After the three-dimensional point cloud data of a number of position points on the object have been acquired, the contour information of the object can be determined by analyzing the point cloud data.
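As an illustration of the triangulation just described, the following sketch estimates the depth s and height h of a stripe point. The pinhole-camera simplification and all parameter names (baseline_m, focal_px, camera_height_m) are assumptions for illustration only, not values or an implementation taken from this application.

```python
import math

def depth_from_stripe(pixel_offset_px: float, baseline_m: float,
                      focal_px: float) -> float:
    """Estimate the depth s of a stripe point by laser triangulation.

    pixel_offset_px: displacement of the laser stripe in the image relative
        to the optical axis, in pixels.
    baseline_m: distance between the line laser emitter and the laser camera.
    focal_px: camera focal length expressed in pixels.
    """
    # Similar triangles: depth / baseline = focal length / pixel offset.
    return baseline_m * focal_px / pixel_offset_px

def height_above_surface(depth_m: float, row_offset_px: float,
                         focal_px: float, camera_height_m: float) -> float:
    """Rough height h of the illuminated point above the working surface,
    assuming the camera's optical axis is parallel to that surface."""
    vertical_angle = math.atan2(row_offset_px, focal_px)
    return camera_height_m - depth_m * math.tan(vertical_angle)

# Example: stripe point 40 px off-axis, 5 cm baseline, 600 px focal length.
s = depth_from_stripe(40.0, 0.05, 600.0)         # about 0.75 m ahead
h = height_above_surface(s, -30.0, 600.0, 0.06)  # rough height above the floor
print(f"depth s = {s:.2f} m, height h = {h:.3f} m")
```

A real module would replace these closed-form approximations with a full camera and laser-plane calibration, but the similar-triangles relation is the core of the triangulation ranging principle referred to above.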
The vision sensor 103 may be a vision camera capable of capturing visible light images, including but not limited to a monocular RGB camera, a binocular RGB camera, and the like. Further optionally, the optical filter of the vision sensor 103 does not pass the line laser light emitted by the line laser emitter 102 and reflected back by objects, which ensures that the vision sensor 103 collects visible light images that do not contain the laser stripes produced when the line laser strikes an object, such as the visible light image shown in fig. 1, thereby ensuring the quality of the image data collected by the vision sensor 103.
It should be noted that the structured light module described above can detect information such as the three-dimensional point cloud data, contour, height, width, depth and length of an object through its structured light assembly, while the vision sensor 103 can perceive information such as the color features, texture features, shape features and spatial relationship features of the object. Richer environmental information is thus perceived, which helps to improve the intelligence of the self-moving device.
The structure and operation of several structured light modules that may be used in the embodiments of the present application are briefly described below with reference to fig. 2 and 3. Those skilled in the art will appreciate that the following structured light modules are merely examples, and that the structured light modules usable in the embodiments of the present application are not limited to them.
As shown in fig. 2 and 3, a structured light module mainly includes a structured light assembly and a vision assembly. The structured light assembly comprises a laser camera 101 and line laser emitters 102 arranged on both sides of the laser camera 101. The vision assembly includes a vision sensor 103. The structured light assembly and the vision assembly may be controlled by a controller internal to the structured light module or by an external controller. For ease of understanding, the controller inside the structured light module is referred to as the module controller 104; it is drawn in phantom in fig. 2 to indicate that it is an optional component. When the structured light module is applied to a self-moving device, all or some of the components of the structured light module may operate under the control of the main controller 106 of the self-moving device. For ease of understanding, the description below takes as an example the case where the structured light assembly operates under the control of the module controller 104 and the vision assembly operates under the control of the main controller 106.
The line laser emitter 102 may be mounted above, below, to the left or to the right of the laser camera 101, as long as the line laser it emits lies within the field of view of the laser camera 101. In fig. 2 and 3, the line laser emitters 102 are shown mounted on the left and right sides of the laser camera 101. As shown in fig. 1, in this structured light module the laser stripe formed when the laser plane emitted by a line laser emitter 102 strikes an obstacle or the ground lies level with the ground ahead and perpendicular to the advancing direction of the self-moving device; this mounting may be referred to as horizontal mounting. Fig. 1 schematically shows the installation state and application state of the structured light module on the self-moving device.
As shown in fig. 1, during forward travel of the self-moving device, the structured light module can be controlled to operate in a given manner, for example to perform environment detection periodically (e.g., every 20 ms), thereby obtaining a sequence of laser images. Each laser image contains the laser stripe formed where the line laser strikes the surface of an object or the ground; each laser stripe yields a plurality of three-dimensional data points, and the three-dimensional data on the laser stripes of many laser images together form the three-dimensional point cloud data.
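As an illustration only, the following sketch shows how per-frame stripe points might be merged into a world-frame point cloud using the device pose; the planar pose transform and the helper names are assumptions, not an implementation taken from this application.

```python
import math
from typing import List, Tuple

Point3D = Tuple[float, float, float]
Pose2D = Tuple[float, float, float]  # (x, y, heading) of the device

def to_world(point: Point3D, pose: Pose2D) -> Point3D:
    """Transform a point from the device frame into the world frame."""
    x, y, theta = pose
    px, py, pz = point
    wx = x + px * math.cos(theta) - py * math.sin(theta)
    wy = y + px * math.sin(theta) + py * math.cos(theta)
    return (wx, wy, pz)

def accumulate_cloud(stripe_frames: List[List[Point3D]],
                     poses: List[Pose2D]) -> List[Point3D]:
    """Merge the stripe points of successive detections (e.g. every 20 ms)
    into a single world-frame point cloud."""
    cloud: List[Point3D] = []
    for stripe_points, pose in zip(stripe_frames, poses):
        cloud.extend(to_world(p, pose) for p in stripe_points)
    return cloud
```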
Optionally, the module controller 104 may control the exposure of the laser camera 101 and may control the line laser emitter 102 to emit line laser during the exposure of the laser camera 101, so that the laser camera 101 collects a laser image of the scene probed by the line laser. The module controller 104 may control the line laser emitters 102 located on the two sides of the laser camera 101 to operate simultaneously or alternately, without limitation. The embodiments of the present application do not limit the implementation of the module controller 104; it may be, for example but not limited to, a processor such as a CPU, GPU or MCU. Nor do the embodiments limit the manner in which the module controller 104 controls the structured light module; any implementation capable of realizing the functions of the structured light module is applicable.
Specifically, the module controller 104 may control the exposure frequency, exposure duration, working frequency and the like of the laser camera 101. Under the control of the module controller 104, the laser camera 101 collects laser images of the scene probed by the line laser while the line laser emitter 102 is emitting. From the laser images collected by the laser camera 101, the distance (i.e., depth information) from the structured light module, or from the device on which it is mounted, to an object in front (e.g., an obstacle) can be calculated, as can information such as the three-dimensional point cloud data, contour, shape, height, width and volume of that object, on the basis of which three-dimensional reconstruction and the like can be performed. The distance between the laser camera 101 and an object in front of it can be calculated with trigonometric functions using the laser triangulation principle.
In this embodiment, the implementation form of the line laser emitter 102 is not limited; it may be any device or product form capable of emitting line laser. For example, the line laser emitter 102 may be, but is not limited to, a laser tube. The wavelength of the line laser emitted by the line laser emitter 102 is likewise not limited; the line laser may have different colors, for example red laser or violet laser, and may be visible or invisible light.
In this embodiment, the implementation form of the laser camera 101 is not limited; any vision device that can collect laser images of the environment probed by the line laser emitted by the line laser emitter 102 is applicable. For example, the laser camera 101 may be a camera capable of capturing the line laser emitted by the line laser emitter 102, such as an infrared camera, an ultraviolet camera, a starlight camera, a high-definition camera, a 2D vision camera for red laser light, or a 2D vision camera for violet laser light, matched to the wavelength of the line laser emitted by the line laser emitter 102. The laser camera 101 collects laser images within its field of view. The field of view of the laser camera 101 includes a vertical field angle, a horizontal field angle and a diagonal field angle; none of these are limited in this embodiment, and a laser camera 101 with an appropriate field of view may be selected according to the application requirements. Optionally, the horizontal field angle of the laser camera 101 is 100.6°, the vertical field angle is 74.7°, and the diagonal field angle is 133.7°.
In this embodiment, as long as the line laser emitted by the line laser emitter 102 lies within the field of view of the laser camera 101, the angle between the laser stripe formed by the line laser on the object surface and the horizontal plane is not limited: the stripe may be parallel or perpendicular to the horizontal plane, or form any other angle with it, as determined by the application requirements.
In this embodiment, the implementation form of the vision sensor 103 is not limited; any vision device capable of collecting visible light images is applicable. A visible light image captures features of objects in the environment such as color features, texture features, shape features and spatial relationship features, and can help to identify information such as the type and material of an object. In the embodiments of the present application, the vision sensor 103 captures an image of the environment within its field of view as a visible light image. The vision sensor 103 may include, but is not limited to, a monocular RGB camera, a binocular RGB camera, and the like. A monocular RGB camera comprises one RGB camera and a binocular RGB camera comprises two RGB cameras, the RGB camera being a 2D vision camera capable of collecting RGB images. The field of view of the vision sensor 103 includes a vertical field angle, a horizontal field angle and a diagonal field angle; none of these are limited in this embodiment, and a vision sensor 103 with an appropriate field of view may be selected according to the application requirements. Optionally, the horizontal field angle of the vision sensor 103 is 148.3°, the vertical field angle is 125.8°, and the diagonal field angle is 148.3°.
It should be appreciated that the optical filter of the RGB camera does not pass the line laser light emitted outward by the line laser emitter 102 and reflected back by objects. The RGB camera therefore captures visible light images that do not contain the laser stripes produced when the line laser strikes an object; in other words, the image of the environment acquired by the vision sensor 103 within its field of view is a visible light image without laser stripes.
Optionally, the vision sensor 103 operates under the control of the main controller 106. For example, the main controller 106 may control the exposure frequency, the exposure time period, the operating frequency, etc. of the vision sensor 103.
Further optionally, referring to fig. 2 and 3, the vision assembly in the structured light module may further include an indicator light 105, whose on/off state indicates the working state of the vision sensor 103: the indicator light 105 being lit indicates that the vision sensor 103 is in the working state, and the indicator light 105 being off indicates that the vision sensor 103 is in the off state. Optionally, the indicator light 105 operates under the control of the module controller 104, which may interact with the main controller 106 to obtain the working state of the vision sensor 103 reported by the main controller 106 and switch the indicator light 105 on or off accordingly.
Further optionally, in order to improve the intelligence of the structured light module, the module controller 104 may control the image acquisition of both the structured light assembly and the vision assembly and also undertake the processing of the laser image data and visible light image data they collect.
Further optionally, in order to reduce the data processing load of the structured light module and improve its image acquisition efficiency, the main controller 106 may instead process the laser image data and visible light image data collected by the structured light assembly and the vision assembly. In this case the structured light module transmits the laser image data it collects to the main controller 106 through the module controller 104, while the main controller 106 also acquires the visible light image data collected by the vision assembly. The main controller 106 may analyze the laser image data to obtain the three-dimensional point cloud data, contour, shape, height and/or width, volume and so on of an object, and may analyze the visible light image data to identify information such as the color features, texture features, shape features, spatial relationship features, type and material of the object.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 4 is a flowchart of an operation method provided in an exemplary embodiment of the present application. The method is suitable for a self-moving device equipped with the structured light module; for the description of the structured light module, refer to the foregoing. As shown in fig. 4, the method comprises the following steps:
401. Acquire structured light data and image data in the front working area by using, respectively, the structured light assembly and the vision sensor of the structured light module.
402. Identify, based on the image data, the target object category present in the front working area, and select a target machine behavior mode adapted to the target object category.
403. With the aid of the structured light data, control the self-moving device to perform the operation task on the target object present in the front working area according to the target machine behavior mode.
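Before each step is described in detail, steps 401 to 403 can be pictured as one control loop. The sketch below is purely illustrative: every attribute and function name on the hypothetical device object is a placeholder, not an interface defined by this application.

```python
def work_cycle(device):
    """One hypothetical iteration of the method of fig. 4 (steps 401-403).

    `device` is assumed to expose the structured light module, a recognizer
    and drive control; none of these attribute names come from the patent.
    """
    # 401: acquire structured light data and image data of the front working area.
    structured_light_data = device.structured_light_assembly.capture()
    image_data = device.vision_sensor.capture()

    # 402: identify the target object category and select an adapted behavior mode.
    target_category = device.recognizer.classify(image_data)
    target_mode = device.behavior_table.get(target_category, device.default_mode)

    # 403: perform the operation task on the target object, aided by the
    # structured light data (distance, contour, height, width, ...).
    device.execute_task(target_mode, structured_light_data)
```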
In the embodiments of the present application, during operation of the self-moving device, the structured light module can be used to detect environment information of the front working area. The front working area is the range that the self-moving device can perceive along its advancing direction during operation; its environment information changes as the self-moving device advances and differs between working areas. Specifically, the structured light assembly collects the structured light data in the front working area, that is, after the line laser emitter emits line laser into the front working area, the laser camera collects the laser image data of the environment probed by the line laser. At the same time, the vision sensor collects the image data in the front working area, which is visible light image data.
After the structured light data and the image data of the front working area have been acquired, whether an object is present in the front working area and the category to which it belongs are first identified based on the image data.
It should be noted that in the embodiments of the present application, objects are classified from the perspective of their impact on the operation of the self-moving device. For example, object categories can be broadly divided into easy-to-trap, easy-to-wind, easy-to-soil and movable, although they are not limited to these. Easy-to-trap objects are objects in the working environment that tend to trap or jam the self-moving device; easy-to-wind objects are objects in the working environment that tend to tangle around the self-moving device; easy-to-soil objects are objects in the working environment whose surrounding areas easily become dirty; and movable objects are movable objects in the working environment that may interfere with the normal travel of the self-moving device and with its performance of the operation task (for example, the floor space occupied by a movable object cannot be cleaned by the self-moving device and requires special handling).
Taking the home environment as an example, assume the following objects are present: a garbage bin, a charging dock, shoes, bowls and basins, a U-shaped chair, a bar chair, a sliding door rail, clothes, carpet edges, electric wires, people, animals, and so on. Among these objects, some belong to the easy-to-trap category, some to the easy-to-wind category, some to the easy-to-soil category, and some are movable objects. For example, easy-to-trap objects include, but are not limited to, U-shaped chairs, bar chairs and sliding door rails; easy-to-wind objects include, but are not limited to, clothes, carpet edges and electric wires; easy-to-soil objects include, but are not limited to, garbage bins, charging docks, shoes, and bowls and basins; and movable objects include, but are not limited to, people and animals.
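In software, this coarse taxonomy could be represented with a simple enumeration and an example lookup, as sketched below; the class names and the object-to-category assignment only restate the examples in the preceding paragraph and are illustrative, not exhaustive.

```python
from enum import Enum, auto

class ObjectCategory(Enum):
    EASY_TO_TRAP = auto()   # tends to trap or jam the self-moving device
    EASY_TO_WIND = auto()   # tends to tangle around brushes or wheels
    EASY_TO_SOIL = auto()   # its surroundings tend to become dirty
    MOVABLE = auto()        # can move by itself (people, pets)

# Example assignment, following the home-environment examples above.
EXAMPLE_OBJECTS = {
    "U-shaped chair": ObjectCategory.EASY_TO_TRAP,
    "bar chair": ObjectCategory.EASY_TO_TRAP,
    "sliding door rail": ObjectCategory.EASY_TO_TRAP,
    "clothes": ObjectCategory.EASY_TO_WIND,
    "carpet edge": ObjectCategory.EASY_TO_WIND,
    "electric wire": ObjectCategory.EASY_TO_WIND,
    "garbage bin": ObjectCategory.EASY_TO_SOIL,
    "charging dock": ObjectCategory.EASY_TO_SOIL,
    "shoes": ObjectCategory.EASY_TO_SOIL,
    "bowl": ObjectCategory.EASY_TO_SOIL,
    "person": ObjectCategory.MOVABLE,
    "animal": ObjectCategory.MOVABLE,
}
```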
In the embodiments of the present application, the category of the object present in the front working area, identified based on the image data collected by the vision sensor of the structured light module, is referred to as the target object category. The target object category may be any one or more of the object categories listed above, without limitation. The manner of identifying the target object category from the image data collected by the vision sensor is likewise not limited; an example embodiment is given below.
Optionally, the self-moving device may use an AI (Artificial Intelligence) algorithm to perform object recognition on the image data collected by the vision sensor of the structured light module, thereby obtaining the target object category present in the working area in front of the self-moving device. Optionally, the AI recognition result includes which object in the working environment the object is, and the category to which it belongs. Object recognition based on an AI algorithm may specifically use a pre-trained neural network model. Specifically, a large number of sample object images can be prepared in advance and annotated with the object categories of the objects they contain; model training is then performed on the sample object images and their annotations to obtain an image recognition model capable of recognizing object categories, and this model is built into the self-moving device. After the vision sensor of the structured light module has collected image data of the working area in front of the self-moving device, the self-moving device can use the image recognition model to perform object recognition on the image data and thereby obtain the target object category present in the front working area. The network structure of the image recognition model includes, but is not limited to, a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), or an LSTM (Long Short-Term Memory network).
Further optionally, the image recognition model adopted in the embodiments of the present application includes a feature extraction network and a classification neural network. One way in which the image recognition model recognizes the object category present in the working area in front of the self-moving device from the image data is as follows: input the image data into the feature extraction network, generate at least one candidate box on the image data, and pool the feature map corresponding to each candidate box to obtain a first feature vector; select, based on the first feature vector of each candidate box, the valid candidate boxes from the at least one candidate box, a valid candidate box being one whose image region contains an object; input the image inside each valid candidate box into the classification neural network and extract its features to obtain a second feature vector; and identify the object category of the object in the valid candidate box based on the second feature vector. For example, the second feature vector may be matched against the feature vectors of known objects in a pre-maintained feature library, and the category of the known object whose feature vector matches the second feature vector is taken as the category of the object in the valid candidate box.
Optionally, the feature library maintains the association between known objects and their object categories, a known object being an object whose object category has already been confirmed.
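The feature-library matching described above might be sketched as follows; the cosine-similarity measure, the 0.8 threshold and the library layout are illustrative assumptions rather than this application's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def classify_against_library(second_feature: np.ndarray,
                             feature_library: dict,
                             threshold: float = 0.8):
    """Match a valid candidate box's second feature vector against a library
    mapping known object name -> (feature vector, object category).

    Returns (object name, object category) of the best match, or None when
    nothing in the library is similar enough.
    """
    best_name, best_score = None, threshold
    for name, (vector, _category) in feature_library.items():
        score = cosine_similarity(second_feature, vector)
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None:
        return None
    return best_name, feature_library[best_name][1]
```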
In the embodiments above or below, after the self-moving device has identified the target object present in the front working area and the target object category to which it belongs, the identified target object and its category may also be written into the map area corresponding to the target object in the environment map.
Further optionally, some objects in the working environment, and the categories to which they belong, may not be recognized by the AI algorithm. For such objects, the user is allowed to add the objects and their categories to the environment map. Specifically, the user can view the environment map on the display screen of a terminal device bound to the self-moving device, or on the display screen of the self-moving device itself, and compare the objects and categories already recorded in the environment map with the objects and categories actually present in the working environment. If they do not match, the user can update the environment map so that it reflects more accurately the objects actually present in the working environment and the categories to which they belong, making the map better fit the working environment. It should be appreciated that an environment map that better fits the working environment helps the self-moving device perceive the objects present in the working environment more accurately, which in turn improves its working performance.
The situations in which the objects actually present in the working environment, and their categories, do not match what is recorded in the environment map include the following:
First case: an object actually present in the working environment, and its category, do not appear in the environment map at all.
Second case: an object actually present in the working environment appears in the map, but its category is inconsistent with the information marked in the environment map.
For ease of understanding and distinction, an object that is actually present in the working environment but does not appear in the environment map is referred to as a first object. According to the position of the first object in the working environment, the user can add the first object and its object category to the environment map.
For ease of understanding and distinction, an object that is actually present in the working environment but is marked incorrectly in the environment map is referred to as a second object. The user may modify the relevant information of the second object in the environment map so that it matches the real information.
Further, to meet the user's need to modify known object categories, the self-moving device may, on receiving a modification request from the user for a known object category, display the known object category and respond to a first modification operation initiated on it to obtain the modified object category. The known object categories are those set by the user on the environment map and/or those identified by the self-moving device from historical image data. The first modification operation comprises at least one of: modifying the name of an object category, adjusting which objects correspond to an object category, and deleting a known object category.
For the operation of modifying the name of an object category, the objects under the category before modification become objects under the modified category. For example, the object category of shoes may be changed from the easy-to-soil category to the easy-to-wind category.
For the operation of adjusting which objects correspond to an object category, the object category of the adjusted object changes. For example, the objects in the easy-to-trap category may change from U-shaped chairs, bar chairs and sliding door rails to U-shaped chairs and bar chairs only, i.e., the sliding door rails are removed from the easy-to-trap category.
For the operation of deleting a known object category, objects under the deleted category will no longer subsequently be identified as belonging to that category.
When an object actually present in the working environment, or its category, does not match what is already recorded in the environment map, the user can set the correct object and its category in the environment map; the object categories set by the user in this way are treated as known object categories of the environment map.
It should be noted that if the self-moving device learns that the known object category information has been modified, the target object category in the front working area may subsequently be identified from the image data acquired by the vision sensor in combination with the modified known object category information.
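A minimal sketch of how such user edits to the environment-map annotations might be represented is given below; the dictionary layout and function names are purely illustrative assumptions, not this application's data model.

```python
from typing import Dict, Optional

# Environment-map object annotations: object name -> object category, as a
# user might see and edit them.
map_annotations: Dict[str, Optional[str]] = {
    "U-shaped chair": "easy to trap",
    "shoes": "easy to soil",
}

def add_first_object(name: str, category: str) -> None:
    """First case: an object present in the environment is missing from the
    map, so the user adds it together with its category."""
    map_annotations[name] = category

def relabel_second_object(name: str, category: str) -> None:
    """Second case: an object is in the map but its category is marked
    incorrectly, so the user corrects it."""
    map_annotations[name] = category

def delete_known_category(category: str) -> None:
    """First modification operation (deletion variant): objects under the
    deleted category are no longer identified as belonging to it."""
    for obj, cat in map_annotations.items():
        if cat == category:
            map_annotations[obj] = None  # left unlabeled

# Example: correct the category of "shoes" after a user edit.
relabel_second_object("shoes", "easy to wind")
```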
After the target object category present in the front working area has been identified from the image data, a target machine behavior mode adapted to that category is selected. It should be appreciated that the target machine behavior mode adapted to the target object category is a behavior mode chosen for the self-moving device specifically on the basis of the target object category; when the self-moving device performs the operation task on the target object in the front working area according to this mode, it is less affected by the target object. Target machine behavior modes adapted to different target object categories are described below.
case 1: for trapped objects, the target machine behavior pattern may be an obstacle avoidance behavior pattern or an acceleration traffic behavior pattern. If the self-mobile device encounters an easy-to-catch object in the process of executing the operation task, if Yi Ka is not passable, the self-mobile device avoids the easy-to-catch object according to the obstacle avoidance behavior mode; if Yi Ka trapped objects can pass, the self-mobile equipment rapidly passes through the inside of the trapped objects according to an acceleration passing behavior mode; so as to reduce the probability of being trapped by the easy-to-be-stuck object to be stuck.
Taking the sweeping robot as an example, if it encounters an impassable easy-to-trap object such as a U-shaped chair or a bar chair while performing the sweeping task, it gives up sweeping the area around that object and avoids it according to the obstacle avoidance behavior mode.
If the sweeping robot encounters a passable easy-to-trap object such as a sliding door rail while performing the sweeping task, it gives up sweeping the area around the rail and accelerates across the easy-to-trap object according to the accelerated traversal behavior mode.
Case 2: for easy-to-wind objects, the target machine behavior mode may be a decelerated operation behavior mode. In this mode the self-moving device slows down its operation so as to reduce the probability of being entangled by the easy-to-wind object.
For example, if the sweeping robot encounters an easy-to-wind object such as clothes, a carpet edge or an electric wire while performing the sweeping task, it may, according to the decelerated operation behavior mode, switch off the side brush or the rolling brush as appropriate, or lower the rotation speed of the side brush; that is, it stops or slows down the sweeping operation. Once the robot has moved away from the easy-to-wind object, it returns to the normal cleaning state.
Case 3: for easy-to-soil objects, the target machine behavior mode may be a strengthened operation behavior mode. In this mode the self-moving device raises its working capability to give the easy-to-soil object strengthened treatment.
For example, if the sweeping robot encounters easy-to-soil objects such as a garbage bin, a charging dock, shoes, or bowls and basins while performing the sweeping task, it strengthens the cleaning effort around them according to the strengthened operation behavior mode. In practice the robot can raise the rotation speed of the side brush and rolling brush and increase the suction of the fan, and it may also sweep around such objects repeatedly or clean them in multiple passes.
Case 4: for movable objects, the target machine behavior mode may be a voice prompt behavior mode. The voice prompt behavior mode enables interaction between the self-moving device and the movable object, prompting the movable object to leave the area where the self-moving device needs to perform the operation task.
For example, if the sweeping robot encounters a person while performing the sweeping task, it prompts the person, according to the voice prompt behavior mode, to leave the current position or to lift their feet, so that the robot can finish cleaning the area occupied by the person's feet.
In the embodiments above or below, in order to select the target machine behavior mode adapted to the target object category accurately and quickly, the known object categories and their corresponding machine behavior modes may be stored in association in advance; the self-moving device can then look up this association with the target object category and take the machine behavior mode corresponding to that category as the target machine behavior mode.
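Storing known object categories in association with machine behavior modes can be as simple as a lookup table. The sketch below merely restates cases 1 to 4 above; the key strings and the default mode are illustrative assumptions.

```python
# Pre-stored association between known object categories and machine
# behavior modes.
BEHAVIOR_MODE_BY_CATEGORY = {
    "easy to trap (impassable)": "obstacle avoidance",
    "easy to trap (passable)": "accelerated traversal",
    "easy to wind": "decelerated operation",
    "easy to soil": "strengthened operation",
    "movable": "voice prompt",
}

def select_target_mode(target_category: str, default: str = "normal") -> str:
    """Look up the machine behavior mode adapted to the identified target
    object category."""
    return BEHAVIOR_MODE_BY_CATEGORY.get(target_category, default)
```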
In the embodiments above or below, a known object category and its corresponding machine behavior mode may be set by the self-moving device or by the user, without limitation.
Further optionally, the user may also be allowed to modify the machine behavior mode corresponding to a known object category according to the actual application requirements. A machine behavior mode includes at least the behavior parameters and behavior actions the self-moving device needs in order to perform the operation task. The behavior parameters include, but are not limited to, the number of operation passes, the suction of the fan, the rotation speed of the side brush, and the distance and direction angle relative to the target object when an action is performed. The behavior actions include, for example, an acceleration action, a deceleration action, an obstacle avoidance action, a strengthening action, and a voice prompt action.
Thus, in the embodiments above or below, the method further comprises: displaying the machine behavior mode corresponding to a known object category, responding to a second modification operation initiated on the machine behavior mode, and obtaining the modified machine behavior mode. The second modification operation comprises at least one of: modifying an existing behavior parameter, adding a new behavior parameter, deleting an existing behavior parameter, modifying an existing machine action parameter, adding a new machine action parameter, and deleting an existing machine action parameter.
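A minimal sketch of how a machine behavior mode and the second modification operation might be represented is given below; the field names, parameter names and operation strings are assumptions for illustration, not this application's data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BehaviorPattern:
    """Illustrative container for a machine behavior mode: the behavior
    parameters plus the behavior actions the device performs."""
    parameters: Dict[str, float] = field(default_factory=dict)  # e.g. fan suction, side-brush speed
    actions: List[str] = field(default_factory=list)            # e.g. "decelerate", "voice prompt"

def apply_second_modification(pattern: BehaviorPattern, operation: str,
                              key: str, value=None) -> None:
    """Apply one of the second modification operations described above."""
    if operation in ("modify_parameter", "add_parameter"):
        pattern.parameters[key] = value
    elif operation == "delete_parameter":
        pattern.parameters.pop(key, None)
    elif operation == "add_action":
        pattern.actions.append(key)
    elif operation == "delete_action":
        if key in pattern.actions:
            pattern.actions.remove(key)

# Example: raise the fan suction of the strengthened-operation mode.
strengthen = BehaviorPattern({"fan_suction": 0.7, "side_brush_rpm": 1200.0},
                             ["repeat_clean"])
apply_second_modification(strengthen, "modify_parameter", "fan_suction", 0.9)
```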
To improve the working performance of the self-moving device, it can be controlled, with the aid of the structured light data, to perform the operation task on the target object present in the front working area according to the target machine behavior mode. For example, the structured light data can provide information such as the three-dimensional point cloud data, contour, shape, height, width, depth (i.e., the distance of the object from the self-moving device), length, thickness and volume of the object, and combining this information improves the operation performance of the self-moving device.
With the operation method provided by the embodiments of the present application, the richer environmental data available from the structured light module on the self-moving device is fully exploited to recognize the categories of different objects in the working environment, and different machine behavior modes are then adopted when performing the operation task on objects of different categories, so that the operation task is performed in a more targeted, refined and purposeful way, the operation time can be shortened, the operation capability improved, and the user experience enhanced. For a sweeping robot in particular, adopting different machine behavior modes when performing the cleaning task on different categories of objects yields more targeted, refined and purposeful cleaning, shortens the cleaning time, improves the cleaning capability, and improves the user experience.
In the above or following embodiments of the present application, in order to improve the accuracy of identifying the object classes present in the working area in front of the self-moving device, the target object class identified from the image data may also be corrected in combination with the structured light data before the target machine behavior pattern adapted to the target object class is selected. For example, the structured light data can be used to check whether the height, width, length or volume of the object matches the target object class. For another example, considering that objects of the same object class have similar contours, the contour of the object may be recognized from the structured light data and the target object class corrected based on this contour information.
Thus, in the above or below embodiments of the present application, before selecting the target machine behavior pattern adapted to the target object class, the above method further comprises: identifying a target object profile present in the front working area based on the structured light data; and correcting the category of the target object according to the profile of the target object.
For example, when the contour of the target object existing in the front working area is identified based on the structured light data, three-dimensional point cloud data of the target object may be acquired based on the structured light data, three-dimensional reconstruction may be performed on the target object based on the three-dimensional point cloud data of the target object, and contour feature extraction may be performed on the target object obtained by the three-dimensional reconstruction, so as to acquire the contour of the target object.
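As a rough illustration only, the following sketch derives an approximate contour from structured-light point cloud data by projecting the points onto the lateral axis and taking per-bin height extremes; the coordinate convention, bin width and function name are assumptions, and this is a simplification rather than the three-dimensional reconstruction itself.

import numpy as np

def approximate_contour(points: np.ndarray, bin_width: float = 0.01):
    """Approximate an object contour from an (N, 3) point cloud.

    points[:, 0] is the lateral coordinate and points[:, 2] the height above the
    working surface (assumed conventions). For each lateral bin, the minimum and
    maximum observed heights are kept as a crude contour profile.
    """
    x, z = points[:, 0], points[:, 2]
    bins = np.floor(x / bin_width).astype(int)
    contour = {}
    for b, height in zip(bins, z):
        lo, hi = contour.get(b, (height, height))
        contour[b] = (min(lo, height), max(hi, height))
    return {b * bin_width: hv for b, hv in sorted(contour.items())}

pts = np.array([[0.00, 0.0, 0.11], [0.01, 0.0, 0.13], [0.02, 0.0, 0.12]])
print(approximate_contour(pts))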
In some alternative embodiments of the present application, the contour features of objects belonging to each object class may be extracted in advance. If the target object contour matches the contour features of objects belonging to the target object class, no correction of the target object class is required. If the target object contour does not match the contour features of objects belonging to the target object class, the object class corresponding to the target object contour is taken as a reference object class, and the target object class is corrected according to the reference object class; different object classes have different object contours. In some alternative embodiments of the present application, one implementation of correcting the target object class according to the reference object class is: when the degree of difference between the target object class and the reference object class is smaller than a set threshold, directly correcting the target object class to the reference object class; alternatively, when the degree of difference between the target object class and the reference object class is greater than or equal to the set threshold, determining an intermediate object class that transitions between the target object class and the reference object class, and correcting the target object class to the intermediate object class.
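A minimal sketch of the correction rule only, assuming that a numeric degree of difference between two classes and a table of intermediate classes are available; both tables below are stand-ins invented for this example.

# Illustrative stand-ins; a real system would derive these from its own class taxonomy.
DIFFERENCE = {
    ("easy_to_trap", "easy_to_entangle"): 0.7,
    ("easy_to_stain", "movable"): 0.2,
}
INTERMEDIATE = {("easy_to_trap", "easy_to_entangle"): "low_obstacle"}

def correct_class(target_cls: str, reference_cls: str, threshold: float = 0.5) -> str:
    """Correct the target class using the reference class derived from the contour."""
    key = (target_cls, reference_cls)
    diff = DIFFERENCE.get(key, 1.0)
    if diff < threshold:
        return reference_cls                     # classes are close: adopt the reference class
    return INTERMEDIATE.get(key, target_cls)     # otherwise fall back to an intermediate class

print(correct_class("easy_to_stain", "movable"))          # -> movable
print(correct_class("easy_to_trap", "easy_to_entangle"))  # -> low_obstacle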
In some alternative embodiments of the present application, one implementation of correcting the target object class according to the target object contour is: dividing the target object class at a finer granularity according to the target object contour to obtain a sub-class under the target object class. For example, the easy-to-trap class contains not only non-hollowed-out objects such as sliding door rails, but also hollowed-out objects such as U-shaped chairs and bar chairs. When the self-moving device encounters a non-hollowed-out object, it can accelerate and quickly pass over it so as to avoid being trapped or stuck. When the self-moving device encounters a hollowed-out object, the object class of the hollowed-out object can be refined further to identify whether the hollowed-out object can be passed through.
Thus, in some alternative embodiments of the present application, one implementation of dividing the target object class at a finer granularity according to the target object contour to obtain a sub-class under the target object class is: when the target object class is the easy-to-trap class, determining, in combination with the target object contour, whether the target object corresponding to the target object contour is a hollowed-out object; and, when the target object is a hollowed-out object, dividing the target object class, in combination with the hollow width of the target object and the body width of the self-moving device, into the two sub-classes of easy-to-trap and non-passable and easy-to-trap and passable.
For example, whether the target object is a hollowed-out object, and whether it belongs to the easy-to-trap and non-passable class or the easy-to-trap and passable class, can be identified from the height information of a plurality of position points on the lower edge of the target object contour close to the working surface (such as the ground, a desktop or a glass surface) and the corresponding lateral distance information.
Further optionally, it is first identified whether there are, on the lower edge of the target object contour close to the working surface (such as the ground, a desktop or a glass surface), a plurality of position points higher than the working surface. If such position points exist, the target object is determined to be a hollowed-out object. When the target object is a hollowed-out object, it is then identified whether the lower edge of the target object has a plurality of consecutive position points whose hollow width is greater than the body width and whose height is greater than the body height. If so, the target object class is divided into the easy-to-trap and passable sub-class; if not, the target object class is divided into the easy-to-trap and non-passable sub-class.
The height of the target object is obtained by averaging the heights of the plurality of consecutive position points. The hollow width refers to the lateral distance corresponding to the plurality of consecutive position points, and can be calculated from the position coordinates of the first and last of these position points, i.e., the distance between the first position point and the last position point. Taking the dome-shaped hollowed-out object shown in fig. 5 as an example, the circles represent a plurality of position points on the dome-shaped hollowed-out object. The heights of these position points above the ground are averaged to obtain the height of the dome-shaped hollowed-out object. If this height is greater than the body height, the lateral distance l of the position points is further calculated; if l is greater than the body width, the self-moving device can pass through the inside of the dome-shaped hollowed-out object, i.e., its object class is divided into easy-to-trap and passable. If the lateral distance l is less than or equal to the body width, or the height of the dome-shaped hollowed-out object is less than or equal to the body height, the self-moving device cannot pass through the inside of the dome-shaped hollowed-out object, i.e., its object class is divided into easy-to-trap and non-passable.
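A minimal sketch of the pass-through test described above, assuming the sampled lower-edge position points are given as (lateral position, height above the working surface) pairs; the data layout and function name are assumptions, while the thresholding mirrors the text.

def classify_hollow_object(points, body_width, body_height):
    """Classify a hollowed-out object as passable or not for the device.

    points: consecutive (x, height) samples on the object's lower edge that lie
    above the working surface, ordered by lateral position x (metres).
    """
    if not points:
        return "easy_to_trap_not_passable"
    # Object height: average of the heights of the consecutive position points.
    mean_height = sum(h for _, h in points) / len(points)
    # Hollow width: lateral distance between the first and the last position point.
    lateral_span = points[-1][0] - points[0][0]
    if mean_height > body_height and lateral_span > body_width:
        return "easy_to_trap_passable"
    return "easy_to_trap_not_passable"

# Example: a dome-shaped hollow about 12 cm high and 20 cm wide, for a device
# body 15 cm wide and 10 cm high.
print(classify_hollow_object([(0.00, 0.12), (0.10, 0.13), (0.20, 0.12)], 0.15, 0.10))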
In the above or below embodiments of the present application, with the aid of the structured light data, one implementation procedure for controlling the execution of a job task by the self-mobile device with respect to a target object present in a front job area in accordance with a target machine behavior pattern is: identifying position information and/or profile parameters of a target object present in the front working area based on the structured light data; and controlling the self-mobile equipment to execute the operation task aiming at the target object according to the target machine behavior mode according to the position information and/or the appearance parameters of the target object.
The position information of the target object may be three-dimensional point cloud data of the target object, and the shape parameters include, but are not limited to, contour, height, width, depth, length and the like.
The following describes the implementation process of controlling the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameters of the target object.
Case 1: and under the condition that the target object type is easy to be trapped and can not pass, selecting the obstacle avoidance behavior mode as a target machine behavior mode matched with the target object type. For the target object which is easy to be trapped and can not pass through, besides considering the position information of the target object to judge whether the current distance between the self-moving equipment and the target object is close to the obstacle avoidance distance, at least considering the outline parameter in the outline parameter is also needed to reduce the damage of the self-moving equipment by the outline edge of the target object in the obstacle avoidance process. Accordingly, one implementation procedure for controlling the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameter of the target object is: and controlling the self-mobile equipment to avoid the obstacle aiming at the target object according to the obstacle avoidance behavior mode based on the position information of the target object and the outline parameters in the outline parameters.
For example, if a U-shaped chair or bar chair is classified as easy-to-trap and non-passable, obstacle avoidance is started when the current distance between the self-moving device and the chair approaches the obstacle avoidance distance, and during obstacle avoidance it is monitored whether the self-moving device touches the contour edge of the chair.
Case 2: and under the condition that the target object class is easy to be trapped and can pass, selecting the acceleration passing behavior mode as a target machine behavior mode matched with the target object class.
If the target object is a non-hollow object, such as a sliding door sliding rail, the self-moving device is controlled to quickly cross the sliding door sliding rail according to the accelerated passing behavior mode.
If the target object is a hollowed-out object, for example a U-shaped chair or a bar chair, then in addition to using the position information of the target object to determine whether the current distance between the self-moving device and the target object has reached the obstacle avoidance distance, at least the hollow width and height in the profile parameters should also be considered, so that the self-moving device passes through a traversable region of the target object and collisions with the target object during passing are reduced. Accordingly, one implementation of controlling the self-moving device to execute the job task for the target object according to the target machine behavior pattern, based on the position information and/or profile parameters of the target object, is: controlling the self-moving device, based on the position information of the target object and the hollow width and height in the profile parameters, to pass through the hollow region of the target object according to the accelerated passing behavior pattern so as to continue executing the job task.
In some alternative embodiments of the application, the accelerated passing behavior pattern includes a first indication parameter indicating the acceleration action and first execution parameters required by the acceleration action, the first execution parameters including a direction parameter, a distance parameter and a speed parameter. The first indication parameter mainly indicates that the behavior action to be executed is an acceleration action. The first execution parameters are the parameters required to execute the acceleration action, such as at least one of a direction parameter, a distance parameter and a speed parameter. The distance parameter may include the distance from the target object at which the accelerated passing mode is started, and/or the distance beyond the target object at which the accelerated passing mode is ended.
For example, if the target object is a non-hollowed-out object such as a sliding door rail, then when the self-moving device is 15 cm away from the rail, it orients itself at a 45-degree angle towards the rail, accelerates across the rail at a speed of 30 cm per second, and after moving at least 10 cm past the rail exits the accelerated passing mode and continues at the normal passing speed.
If the target object is a hollowed-out object, for example a U-shaped chair or a bar chair, then, correspondingly, one implementation of controlling the self-moving device to pass through the hollow region of the target object according to the accelerated passing mode to continue executing the job task, based on the position information of the target object and the hollow width and height in the profile parameters, is: adjusting the direction of the self-moving device, based on the position information of the target object and the hollow width and height in the profile parameters in combination with the direction parameter, so that the self-moving device faces the hollow region of the target object; and controlling the self-moving device to accelerate along its current direction until it passes through the hollow region of the target object, according to the distance parameter and the speed parameter.
It should be understood that the hollow region of the target object that the self-moving device faces refers to a hollow region that the self-moving device can pass through. If the target object is a U-shaped chair or bar chair, then when the self-moving device is 15 cm away from the chair, it orients itself at a 45-degree angle towards the chair, accelerates at a speed of 30 cm per second through the traversable hollow region of the chair, and after moving at least 10 cm past the chair exits the accelerated passing mode and continues at the normal passing speed.
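Only as a hedged illustration of the first execution parameters (direction, distance and speed), the sketch below selects a commanded speed from the device's position relative to the object; the parameter values follow the example above, while the function name and the signed-distance convention are assumptions.

from dataclasses import dataclass

@dataclass
class AcceleratedPassing:
    """Illustrative first execution parameters; values follow the example above."""
    approach_angle_deg: float = 45.0   # heading relative to the rail or hollow region
    trigger_distance_m: float = 0.15   # start accelerating 15 cm before the object
    exit_distance_m: float = 0.10      # leave the mode 10 cm after clearing the object
    pass_speed_mps: float = 0.30       # 30 cm per second while passing

def select_speed(distance_to_object_m: float, cleared_distance_m: float,
                 normal_speed_mps: float, mode: AcceleratedPassing) -> float:
    """Return the commanded speed for the current relative position.

    distance_to_object_m: remaining distance to the object (negative once passed).
    cleared_distance_m: distance travelled beyond the object after passing it.
    """
    if 0.0 <= distance_to_object_m <= mode.trigger_distance_m:
        return mode.pass_speed_mps                 # accelerate through the object
    if distance_to_object_m < 0.0 and cleared_distance_m < mode.exit_distance_m:
        return mode.pass_speed_mps                 # keep the passing speed until cleared
    return normal_speed_mps                        # otherwise move at the normal speed

# Example: 12 cm before a sliding-door rail the device switches to 0.3 m/s.
print(select_speed(0.12, 0.0, 0.2, AcceleratedPassing()))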
Case 3: under the condition that the target object type is easy to wind, selecting a target machine behavior mode in which a deceleration operation behavior mode is matched with the target object type, and correspondingly, controlling the self-mobile device to execute an operation task for the target object according to the target machine behavior mode according to the position information and/or the appearance parameters of the target object, wherein one implementation process is as follows: and controlling the self-moving equipment to execute the operation task aiming at the target object according to the deceleration operation behavior mode based on the outline edge position in the outline parameters of the target object.
For example, easily entangled objects such as clothes, electric wires and carpets can be identified from the structured light data, and the self-moving device, such as a sweeping robot, can decelerate and, based on the contour edge position of the object, reduce the probability of missed cleaning in the area around the object while avoiding becoming entangled with it.
In some alternative embodiments of the application, the decelerated operation behavior pattern includes a second indication parameter indicating the decelerated operation and second execution parameters required by the decelerated operation, the second execution parameters at least including an obstacle avoidance distance and a first side brush rotation speed smaller than a rotation speed threshold. The rotation speed threshold and the first side brush rotation speed are set according to actual application requirements.
If the target object is an easily entangled object such as clothes or electric wires, whose upper surface cannot be worked on and around which only the surrounding area can be worked, the self-moving device can, based on the second indication parameter and the second execution parameters, drive its side brush at the first side brush rotation speed to execute the cleaning task in the surrounding area at a distance from the target object greater than the obstacle avoidance distance.
If the target object is an easily entangled object, such as a carpet, that needs to be worked not only in its surrounding area but also on its upper surface (i.e., above the target object), then, correspondingly, one implementation of controlling the self-moving device to execute the job task for the target object according to the decelerated operation behavior pattern, based on the contour edge position in the profile parameters of the target object, is:
controlling the self-moving device, based on the contour edge position of the target object and in combination with the obstacle avoidance distance, to execute the job task in the surrounding area at a distance from the target object greater than the obstacle avoidance distance; and, when the self-moving device climbs onto the target object to execute the job task, controlling the self-moving device to drive the side brush at the first side brush rotation speed to execute the cleaning task above the target object, based on the height information of the contour upper edge in the profile parameters. The contour upper edge refers to the edge of the contour farthest from the working surface, i.e., the highest edge of the contour.
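As a hedged illustration of the second execution parameters, the following sketch chooses whether cleaning is allowed and which side brush speed to use under the decelerated operation pattern; the numeric values and the above_object flag are assumptions made for this example.

def deceleration_commands(above_object: bool,
                          distance_to_contour_m: float,
                          obstacle_avoidance_m: float = 0.05,
                          first_side_brush_rpm: int = 60):
    """Return (allowed_to_clean, side_brush_rpm) under the decelerated operation pattern.

    above_object=True means the device has climbed onto the object (e.g. a carpet)
    and cleans its upper surface; otherwise it works in the surrounding area.
    """
    if above_object:
        return True, first_side_brush_rpm
    # In the surrounding area, only clean outside the obstacle avoidance distance.
    if distance_to_contour_m > obstacle_avoidance_m:
        return True, first_side_brush_rpm
    return False, 0

print(deceleration_commands(False, 0.08))  # -> (True, 60)
print(deceleration_commands(False, 0.03))  # -> (False, 0)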
The working difficulty of the target object can also be evaluated in combination with the height information of the contour upper edge, and the target object can be further classified accordingly. Taking carpets as an example, some are long-pile carpets and some are short-pile carpets. The contour upper edge of a long-pile carpet is higher than that of a short-pile carpet,
and long-pile carpets are more difficult to clean than short-pile carpets. Both short-pile and long-pile carpets require increased fan suction, with long-pile carpets requiring more suction than short-pile carpets, whereas a hard floor does not require large fan suction. Therefore, the fan suction of the rolling brush of the self-moving device can be adjusted according to the height information of the contour upper edge, which guarantees the endurance of the self-moving device while maintaining a given cleaning force. In this way, the operation of the self-moving device can be controlled more purposefully in combination with the height information of the contour upper edge.
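For illustration only, fan suction could be selected from the contour upper-edge height roughly as sketched below; the threshold values and level names are assumptions rather than values specified by the embodiments.

def fan_suction_level(upper_edge_height_m: float,
                      short_pile_threshold_m: float = 0.01,
                      long_pile_threshold_m: float = 0.03) -> str:
    """Map the contour upper-edge height to a suction level.

    Hard floor -> low suction; short-pile carpet -> medium; long-pile carpet -> high.
    """
    if upper_edge_height_m >= long_pile_threshold_m:
        return "high"
    if upper_edge_height_m >= short_pile_threshold_m:
        return "medium"
    return "low"

print(fan_suction_level(0.035))  # long-pile carpet -> high
print(fan_suction_level(0.002))  # hard floor -> low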
Case 4: and when the target object class is easy to be stained, selecting the enhanced operation behavior mode as a target machine behavior mode matched with the target object class. Accordingly, one implementation procedure for controlling the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameter of the target object is: and controlling the self-mobile equipment to execute the operation task aiming at the target object according to the reinforced operation behavior mode based on the outline edge position in the outline parameter of the target object.
In some alternative embodiments of the application, the enhanced operation behavior pattern includes a third indication parameter indicating the enhanced operation and third execution parameters required by the enhanced operation, the third execution parameters at least including the number of job passes and a second side brush rotation speed greater than the rotation speed threshold;
accordingly, controlling the self-moving device to execute the job task for the target object according to the enhanced operation behavior pattern, based on the contour edge position in the profile parameters of the target object, includes:
controlling the self-moving device, based on the contour edge position in the profile parameters of the target object, to execute the job task multiple times in the surrounding area of the target object according to the number of job passes; and, during each execution of the job task, controlling the self-moving device to drive the side brush at the second side brush rotation speed to execute the cleaning task in the area around the target object. The second side brush rotation speed is set according to actual application requirements and is a higher side brush speed, greater than the rotation speed threshold.
It should be appreciated that the self-moving device performs the job task multiple times within the surrounding area at a distance from the target object greater than the obstacle avoidance distance.
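A minimal sketch of how the third execution parameters (the number of job passes and an elevated second side brush speed) could drive repeated cleaning of the surrounding area; the generator structure and the default values are assumptions made for this example.

def enhanced_cleaning_plan(job_count: int = 3, second_side_brush_rpm: int = 240):
    """Yield one (pass_index, side_brush_rpm) command per cleaning pass.

    The device repeats the job task in the area around an easily stained object,
    driving the side brush above the rotation speed threshold on every pass.
    """
    for i in range(job_count):
        yield i + 1, second_side_brush_rpm

for pass_index, rpm in enhanced_cleaning_plan():
    print(f"pass {pass_index}: side brush at {rpm} rpm around the target object")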
Case 5: in the case that the target object class is movable, a voice prompt behavior mode is selected as a target machine behavior mode adapted to the target object class. Accordingly, one implementation procedure for controlling the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameter of the target object is: controlling the self-mobile device to send out voice prompt information aiming at the target object according to the voice prompt behavior mode based on the position information of the target object so as to prompt the target object to change the state of the target object; and combining the structured light data collected for the target object, identifying the latest state of the target object, and continuously controlling the self-mobile equipment to execute the operation task for the target object under the condition that the latest state meets the voice prompt requirement.
When a movable target object blocks the self-moving device from moving forward, the self-moving device can prompt the movable target object by voice to change its posture so that the self-moving device can continue moving forward.
Taking a person as an example of a movable target object, a sweeping robot cannot clean the floor at the position where the person is located. The robot can therefore play a voice prompt reminding the user to step aside (when the user is standing) or to lift their feet (when the user is sitting), i.e., remind the user to change state. When the person is sitting, the image data collected by the vision sensor can only identify the approximate position of the person and cannot judge whether the person's feet are on the ground. Thus, when the sweeping robot prompts the user by voice to lift their feet, it cannot judge from the image recognition result alone whether the feet have been lifted, whereas the structured light assembly can judge whether the user's feet have been lifted by comparing whether the relevant position of the person changes before and after the prompt. If the feet have been lifted, the sweeping robot passes by the user to clean; otherwise, it cleans around the user.
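The decision described above can be sketched as follows, assuming the structured light data has already been reduced to a coarse occupancy estimate of the region near the prompted person's feet before and after the voice prompt; the threshold and function names are assumptions made for this example.

def decide_after_prompt(occupied_before: float, occupied_after: float,
                        change_threshold: float = 0.3) -> str:
    """Compare structured-light occupancy near the prompted person's feet.

    occupied_* are fractions of structured light points found in the foot region
    (an assumed pre-processing step). A large enough drop is treated as the feet
    having been lifted, so the device passes through; otherwise it goes around.
    """
    if occupied_before - occupied_after >= change_threshold:
        return "pass_through_and_clean"
    return "bypass_and_clean"

print(decide_after_prompt(0.8, 0.1))  # -> pass_through_and_clean
print(decide_after_prompt(0.8, 0.7))  # -> bypass_and_clean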
For ease of understanding, the operation method of the self-moving device provided by the embodiments of the present application is described in detail below by taking a home service robot as an example of the self-moving device, in combination with scenarios in which the home service robot executes tasks in a home environment.
Application scenario example 1:
Home service robots mainly work in a home environment. Fig. 6 shows a typical household layout: the working area of the home service robot may include a master bedroom area, a living room area, a secondary bedroom area, a kitchen area, a bathroom area, a balcony area, and so on. While traveling within its working area, the home service robot uses the vision sensor (such as an RGB camera) in the structured light module to collect RGB image data of the home environment, and identifies the class of the target object present in the working area in front of it based on the RGB image data.
If the robot encounters an obstacle that is easy to trap but passable, such as a sliding door rail, the home service robot collects structured light data of the rail; specifically, the line laser emitter is controlled to emit line laser towards the rail, and the laser camera collects laser images containing the laser stripes formed on the rail by the line laser. Because the structured light data identifies the position, length, height, angle and other information of the sliding door rail more accurately, the home service robot adjusts its body posture according to this information so as to form a suitable angle with the rail, and accelerates across the rail when the distance to the rail reaches the obstacle avoidance distance, so that the rail is crossed at a suitable angle and speed.
If the robot encounters an easy-to-trap obstacle such as a U-shaped chair or a bar chair, the collected structured light data of the U-shaped chair allows the precise positions of its two legs to be identified, so that the home service robot can avoid the chair legs without missing the area between them. From the collected structured light data of the bar chair, the precise position of its entire base disc can be identified, so that the home service robot can clean accurately around the base without getting stuck.
If the robot encounters an easily entangled obstacle such as clothes, electric wires or a carpet, the contour edge position of the object can be located accurately from the collected structured light data, so that the home service robot can approach the object as closely as possible without becoming entangled, avoiding missed cleaning. Meanwhile, the structured light data can also give the pile length of a carpet and confirm whether it is a long-pile or short-pile carpet, which helps the home service robot set an appropriate rolling brush suction (a long-pile carpet requires increased fan suction, and a short-pile carpet requires less suction than a long-pile carpet but more than a hard floor), so that the endurance of the home service robot is guaranteed while the cleaning force is maintained.
If the robot encounters an easily stained obstacle such as a trash can, a charging dock, shoes or bowls, the contour edge position of the object can be located accurately from the collected structured light data, so that the home service robot can approach the object as closely as possible without becoming entangled, avoiding missed cleaning around it.
If a person is encountered, the structured light data can be used to judge whether the user's feet have been lifted by comparing whether the person's approximate position changes before and after the prompt voice. If the feet have been lifted, the home service robot passes by the user to clean; otherwise, it cleans around the user.
It should be noted that, the execution subjects of each step of the method provided in the above embodiment may be the same device, or the method may also be executed by different devices. For example, the execution subject of steps 401 to 403 may be device a; for another example, the execution subject of steps 401 and 402 may be device a, and the execution subject of step 403 may be device B; etc.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations appearing in a specific order are included, but it should be clearly understood that the operations may be performed out of the order in which they appear herein or performed in parallel, the sequence numbers of the operations, such as 401, 402, etc., are merely used to distinguish between the various operations, and the sequence numbers themselves do not represent any order of execution. In addition, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, and are not limited to the "first" and the "second" being different types.
Fig. 7 is a schematic structural diagram of an autonomous mobile apparatus according to an exemplary embodiment of the present application. As shown in fig. 7, the autonomous mobile apparatus includes a device body 70 provided with one or more memories 71, one or more processors 72 and a structured light module 73. The structured light module 73 includes a structured light assembly 731 and a vision assembly 732. The structured light assembly 731 includes at least a laser camera 7311 and line laser emitters 7312. The vision assembly 732 includes at least a vision sensor 7321. In fig. 7, the line laser emitters 7312 are shown distributed on both sides of the laser camera 7311, but the arrangement is not limited thereto. Other implementation structures of the structured light module 73 can be found in the foregoing embodiments and are not repeated here.
Wherein the one or more memories 71 are for storing a computer program; the one or more processors 72 are operable to execute computer programs for:
respectively acquiring structured light data and image data in a front working area by using a structured light assembly and a visual sensor in a structured light module; identifying a target object category existing in the front working area based on the image data, and selecting a target machine behavior mode adapted to the target object category; with the aid of the structured light data, the self-moving device is controlled to execute the task for the target object existing in the front working area according to the target machine behavior mode.
In some alternative embodiments of the application, prior to selecting the target machine behavior pattern that is adapted to the target object class, the processor 72 is further configured to:
identifying a target object profile present in the front working area based on the structured light data; and correcting the category of the target object according to the profile of the target object.
In some alternative embodiments of the present application, the processor 72 is specifically configured to, when modifying the target object class according to the target object profile:
under the condition that the outline of the target object is not matched with the category of the target object, the category of the object corresponding to the outline of the target object is taken as a reference object category, and the category of the target object is corrected according to the reference object category; wherein the different object categories have non-identical object contours;
and/or
And carrying out finer granularity division on the target object category according to the target object outline so as to obtain the sub-category under the target object category.
In some alternative embodiments of the present application, the processor 72 is specifically configured to, when correcting the target object class based on the reference object class:
under the condition that the difference degree between the target object class and the reference object class is smaller than a set threshold value, directly correcting the target object class into the reference object class;
Or alternatively
and determining an intermediate object class that transitions between the target object class and the reference object class, and correcting the target object class to the intermediate object class, in the case where the degree of difference between the target object class and the reference object class is greater than or equal to the set threshold.
In some alternative embodiments of the application, the processor 72 is specifically configured to, when classifying the target object class in finer granularity according to the target object profile:
when the target object class is the easy-to-trap class, determine, in combination with the target object profile, whether the target object corresponding to the target object profile is a hollowed-out object;
and, when the target object is a hollowed-out object, divide the target object class, in combination with the hollow width of the target object and the body width of the self-moving device, into the two sub-classes of easy-to-trap and non-passable and easy-to-trap and passable.
In some alternative embodiments of the present application, the processor 72 is configured to control the self-mobile device to perform a task with respect to a target object present in a front work area in accordance with a target machine behavior pattern with the aid of the structured light data, specifically to:
identifying position information and/or profile parameters of a target object present in the front working area based on the structured light data;
And controlling the self-mobile equipment to execute the operation task aiming at the target object according to the target machine behavior mode according to the position information and/or the appearance parameters of the target object.
In some alternative embodiments of the present application, in the case where the target object class is trapped and not passable, the processor 72 is specifically configured to, when selecting the target machine behavior pattern that is adapted to the target object class: selecting an obstacle avoidance behavior mode as a target machine behavior mode;
accordingly, the processor 72 is specifically configured to, when controlling the self-mobile device to perform a task on the target object according to the target machine behavior pattern based on the position information and/or the profile parameter of the target object:
control the self-moving device to avoid the target object according to the obstacle avoidance behavior pattern based on the position information of the target object and the contour edge in the profile parameters.
In some alternative embodiments of the present application, when the target object class is trapped and passable, the processor 72 is specifically configured to: selecting an acceleration passing behavior mode as a target machine behavior mode;
accordingly, the processor 72 is specifically configured to control the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameter of the target object: and controlling the self-mobile equipment to pass through the hollowed-out area of the target object according to the acceleration passing behavior mode based on the position information of the target object and the hollowed-out width and height in the appearance parameters so as to continuously execute the operation task.
In some alternative embodiments of the application, the accelerated traffic behavior pattern includes: a first indication parameter for indicating the acceleration action and a first execution parameter required by the acceleration action, wherein the first execution parameter comprises a direction parameter, a distance parameter and a speed parameter;
accordingly, the processor 72 is specifically configured to, when controlling the self-mobile device to pass through the hollowed-out area of the target object to continue to execute the task according to the acceleration passing mode based on the position information of the target object and the hollowed-out width and height in the profile parameters: based on the position information of the target object and the hollowed-out width and height in the appearance parameters, the direction parameters are combined, and the direction of the self-moving equipment is adjusted so that the self-moving equipment faces the hollowed-out area of the target object;
and controlling the self-moving equipment to accelerate along the current direction until the self-moving equipment passes through the hollowed-out area of the target object according to the distance parameter and the speed parameter.
In some alternative embodiments of the present application, in the case that the target object class is easy-to-entangle, the processor 72 is specifically configured to, when selecting the target machine behavior pattern adapted to the target object class: select a decelerated operation behavior pattern as the target machine behavior pattern;
accordingly, according to the position information and/or the appearance parameters of the target object, the self-moving equipment is controlled to execute the operation task aiming at the target object according to the target machine behavior mode, and the method comprises the following steps:
And controlling the self-moving equipment to execute the operation task aiming at the target object according to the deceleration operation behavior mode based on the outline edge position in the outline parameters of the target object.
In some alternative embodiments of the application, the deceleration operation behavior pattern includes: a second indicating parameter for indicating the deceleration operation and a second executing parameter required by the deceleration operation, wherein the second executing parameter at least comprises an obstacle avoidance distance and a first side brush rotating speed smaller than a rotating speed threshold value;
accordingly, the processor 72 is specifically configured to, when controlling the self-moving device to perform a task on the target object according to the deceleration operation behavior mode based on the contour edge position in the contour parameter of the target object:
based on the outline edge position of the target object and combining the obstacle avoidance distance, controlling the self-moving equipment to execute a work task in a surrounding area with the distance from the target object being greater than the obstacle avoidance distance; and
when the self-moving device climbs onto the target object to execute the job task, control the self-moving device to drive the side brush at the first side brush rotation speed to execute the cleaning task above the target object, based on the height information of the contour upper edge in the profile parameters.
In some alternative embodiments of the present application, in the case that the target object class is easy-to-stain, the processor 72 is specifically configured to, when selecting the target machine behavior pattern adapted to the target object class: select an enhanced operation behavior pattern as the target machine behavior pattern;
Accordingly, according to the position information and/or the appearance parameters of the target object, the self-moving equipment is controlled to execute the operation task aiming at the target object according to the target machine behavior mode, and the method comprises the following steps:
and controlling the self-mobile equipment to execute the operation task aiming at the target object according to the reinforced operation behavior mode based on the outline edge position in the outline parameter of the target object.
In some alternative embodiments of the application, the enhanced job behavior pattern includes: a third indication parameter for indicating the strengthening operation and a third execution parameter required by the strengthening operation, wherein the third execution parameter at least comprises the operation times and a second side brush rotating speed which is larger than a rotating speed threshold;
accordingly, the processor 72 is specifically configured to, when controlling the self-mobile device to perform a task on the target object according to the enhanced task behavior mode based on the contour edge position in the contour parameter of the target object:
controlling the self-moving equipment to execute operation tasks for a plurality of times in the surrounding area of the target object according to the operation times based on the outline edge position in the outline parameter of the target object; and
and in the process of executing the working task each time, controlling the self-moving equipment to drive the side brush to execute the cleaning task in the area around the target object according to the rotating speed of the second side brush.
In some alternative embodiments of the present application, in the case that the target object class is movable, the processor 72 is specifically configured to, when selecting the target machine behavior pattern adapted to the target object class: select a voice prompt behavior pattern as the target machine behavior pattern;
accordingly, the processor 72 is specifically configured to control the self-mobile device to execute the task for the target object according to the target machine behavior mode according to the position information and/or the shape parameter of the target object:
controlling the self-mobile device to send out voice prompt information aiming at the target object according to the voice prompt behavior mode based on the position information of the target object so as to prompt the target object to change the state of the target object; and
and combining the structured light data collected for the target object, identifying the latest state of the target object, and continuously controlling the self-mobile equipment to execute the operation task for the target object under the condition that the latest state meets the voice prompt requirement.
In some alternative embodiments of the present application, processor 72, when selecting a target machine behavior pattern that is adapted to a target object class, is specifically configured to:
and inquiring the known object type and the machine behavior mode corresponding to the known object type according to the target object type to obtain the machine behavior mode corresponding to the target object type as a target machine behavior mode.
In some alternative embodiments of the application, the processor 72 is further configured to:
display the known object classes, and, in response to a first modification operation initiated for a known object class, acquire the modified object class; the known object classes are set by the user on the environment map and/or identified by the self-moving device based on historical image data;
wherein the first modifying operation comprises at least one of: modifying the name of the object class, adjusting the object corresponding to the object class, and deleting the known object class.
In some alternative embodiments of the application, the machine behavior pattern includes at least behavior parameters and behavior actions required to perform a job task from the mobile device, and the processor 72 is further configured to:
displaying a machine behavior mode corresponding to the known object type, responding to a second modification operation initiated for the machine behavior mode, and acquiring a modified machine behavior mode;
wherein the second modifying operation comprises at least one of: modifying existing behavior parameters, adding new behavior parameters, deleting existing behavior parameters, modifying existing machine behavior actions, adding new machine behavior actions, and deleting existing machine behavior actions.
Further, the autonomous mobile apparatus of the present embodiment may further include some basic components, such as a communication component 74, a power supply component 75, a driving component 76, and the like, in addition to the various components mentioned above.
Wherein the one or more memories are primarily for storing a computer program executable by the host controller to cause the host controller to control the autonomous mobile apparatus to perform the respective tasks. In addition to storing computer programs, the one or more memories may be configured to store other various data to support operations on the autonomous mobile device. Examples of such data include instructions for any application or method operating on the autonomous mobile device, map data of the environment/scene in which the autonomous mobile device is located, operating modes, operating parameters, and so forth.
The communication component is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may further include a Near Field Communication (NFC) module, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and the like.
Alternatively, the drive assembly may include a drive wheel, a drive motor, a universal wheel, or the like. Optionally, the autonomous mobile apparatus of the present embodiment may be implemented as a sweeping robot, and then in the case of being implemented as a sweeping robot, the autonomous mobile apparatus may further include a cleaning assembly, which may include a cleaning motor, a cleaning brush, a dust suction fan, and the like. These basic components, as well as the composition of the basic components, may vary from autonomous mobile apparatus to autonomous mobile apparatus, and embodiments of the present application are merely examples.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising: respectively acquiring structured light data and image data in a front working area by utilizing a structured light assembly and a vision sensor in the structured light module; identifying a target object class existing in a front working area based on the image data, and selecting a target machine behavior pattern adapted to the target object class; and with the aid of the structured light data, controlling the self-moving equipment to execute a work task aiming at a target object existing in a front work area according to the target machine behavior mode.
In addition to the above acts, when executed by the one or more processors, the computer instructions may also cause the one or more processors to perform other acts, which are described in detail in the method embodiment of fig. 4 and are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (21)
1. A method of operation for a self-moving device having a structured light module, the method comprising:
respectively acquiring structured light data and image data in a front working area by utilizing a structured light assembly and a vision sensor in the structured light module;
identifying a target object category existing in a front working area based on the image data, and selecting a target machine behavior mode adapted to the target object category;
controlling, with the aid of the structured light data, the self-moving device to execute a job task for a target object present in the front working area according to the target machine behavior pattern, wherein the rotation speed of a side brush of the self-moving device is controlled when the job task is executed on the area surrounding the target object;
if the target object comprises at least one of the following: a U-shaped chair, a bar chair and a sliding door rail, the rotation speed of the side brush is increased when the job task is executed on the area surrounding the target object;
if the target object comprises at least one of the following: clothes, carpet edges and electric wires, the side brush of the self-moving device is turned off or its rotation speed is reduced when the job task is executed on the area surrounding the target object;
if the target object comprises at least one of the following: a garbage can, a charging dock, shoes and bowls, the rotation speed of the side brush is increased when the job task is executed on the area surrounding the target object.
2. The method of claim 1, further comprising, prior to selecting the target machine behavior pattern that is adapted to the target object class:
identifying a target object profile present in a front working area based on the structured light data;
and correcting the target object class according to the target object profile.
3. The method of claim 2, wherein correcting the target object category according to the target object contour comprises:
when the target object contour does not match the target object category, taking the object category corresponding to the target object contour as a reference object category and correcting the target object category according to the reference object category, wherein different object categories have non-identical object contours;
and/or
dividing the target object category at a finer granularity according to the target object contour, so as to obtain a sub-category under the target object category.
4. The method of claim 3, wherein correcting the target object category according to the reference object category comprises:
in the case that the degree of difference between the target object category and the reference object category is smaller than a set threshold, correcting the target object category directly to the reference object category;
or
in the case that the degree of difference between the target object category and the reference object category is greater than or equal to the set threshold, determining an intermediate object category transitional between the target object category and the reference object category, and correcting the target object category to the intermediate object category.
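Claim 4 reduces to a small decision rule over the disagreement between the image-based and contour-based categories. The sketch below, a minimal illustration, assumes a numeric difference measure and a caller-supplied function for picking an intermediate category; neither is specified in the patent.

```python
# Sketch of the category-correction rule in claim 4. The difference metric
# and the intermediate-category lookup are assumed, illustrative interfaces.

def correct_category(target_cat, reference_cat, difference, threshold, intermediate_of):
    """Return the corrected object category.

    difference      -- how far apart the two categories are (assumed metric)
    intermediate_of -- callable returning a category between two categories
    """
    if target_cat == reference_cat:
        return target_cat
    if difference < threshold:
        return reference_cat                            # small gap: trust the contour
    return intermediate_of(target_cat, reference_cat)   # large gap: use a middle ground
```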
5. The method of claim 3, wherein dividing the target object category according to the target object contour to obtain a sub-category under the target object category comprises:
in the case that the target object category is an easily-stranded category, determining, in combination with the target object contour, whether the target object corresponding to the target object contour is a hollowed-out object;
in the case that the target object is a hollowed-out object, dividing the target object, in combination with the hollowed-out width of the target object and the body width of the self-moving device, into two sub-categories: easily stranded and not passable, and easily stranded and passable.
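The sub-classification in claim 5 compares the hollowed-out width of the object with the robot's body width. A minimal sketch of that comparison is shown below; the clearance margin is an assumed illustrative value, not a figure from the patent.

```python
# Sketch of claim 5: an easily-stranded object with a hollowed-out base is
# passable only if the robot body (plus an assumed margin) fits through it.

def classify_stranding(hollow_width_mm: float, body_width_mm: float,
                       margin_mm: float = 20.0) -> str:
    if hollow_width_mm >= body_width_mm + margin_mm:
        return "easily_stranded_passable"
    return "easily_stranded_not_passable"


print(classify_stranding(hollow_width_mm=400, body_width_mm=350))  # passable
print(classify_stranding(hollow_width_mm=300, body_width_mm=350))  # not passable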
6. The method according to any one of claims 1-5, wherein controlling, with the aid of the structured light data, the self-moving device to execute a work task on a target object present in the front working area according to the target machine behavior mode comprises:
identifying, based on the structured light data, position information and/or appearance parameters of a target object present in the front working area;
and controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode.
7. The method of claim 6, wherein, in the case that the target object category is easily stranded and not passable, selecting a target machine behavior mode adapted to the target object category comprises: selecting an obstacle avoidance behavior mode as the target machine behavior mode;
correspondingly, controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode comprises:
controlling, based on the position information of the target object and the contour parameters among the appearance parameters, the self-moving device to avoid the target object according to the obstacle avoidance behavior mode.
8. The method according to any one of claims 1-5, wherein, in the case that the target object category is easily stranded and passable, selecting a target machine behavior mode adapted to the target object category comprises: selecting an accelerated passage behavior mode as the target machine behavior mode;
correspondingly, controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode comprises:
controlling, based on the position information of the target object and the hollowed-out width and height among the appearance parameters, the self-moving device to pass through the hollowed-out area of the target object according to the accelerated passage behavior mode, so as to continue executing the work task.
9. The method of claim 8, wherein the accelerated passage behavior mode comprises: a first indication parameter indicating an acceleration action and first execution parameters required by the acceleration action, the first execution parameters comprising a direction parameter, a distance parameter, and a speed parameter;
correspondingly, controlling, based on the position information of the target object and the hollowed-out width and height among the appearance parameters, the self-moving device to pass through the hollowed-out area of the target object according to the accelerated passage behavior mode so as to continue executing the work task comprises:
adjusting, based on the position information of the target object and the hollowed-out width and height among the appearance parameters and in combination with the direction parameter, the orientation of the self-moving device so that it faces the hollowed-out area of the target object;
and controlling, according to the distance parameter and the speed parameter, the self-moving device to accelerate along its current orientation until it passes through the hollowed-out area of the target object.
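Claims 8-9 describe a two-step maneuver: orient toward the opening, then accelerate straight through it. The sketch below illustrates that sequence; the `robot` interface (`turn_towards`, `drive`, body dimensions) is a hypothetical stand-in for the device's motion API, not an API named in the patent.

```python
# Sketch of the accelerated-passage behavior in claims 8-9.
# The robot interface and the sanity check are illustrative assumptions.

def accelerate_through(robot, opening_center_xy, opening_width, opening_height,
                       distance_m, speed_mps):
    if opening_width < robot.body_width or opening_height < robot.body_height:
        raise ValueError("opening too small to pass through")
    robot.turn_towards(opening_center_xy)              # direction parameter
    robot.drive(distance=distance_m, speed=speed_mps)  # distance + speed parameters
```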
10. The method according to any one of claims 1-5, wherein, in the case that the target object category is winding-prone, selecting a target machine behavior mode adapted to the target object category comprises: selecting a deceleration operation behavior mode as the target machine behavior mode;
correspondingly, controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode comprises:
controlling, based on the contour edge position among the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the deceleration operation behavior mode.
11. The method of claim 10, wherein the deceleration operation behavior mode comprises: a second indication parameter indicating a deceleration operation and second execution parameters required by the deceleration operation, the second execution parameters comprising at least an obstacle avoidance distance and a first side brush rotating speed smaller than a rotating speed threshold;
correspondingly, controlling, based on the contour edge position among the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the deceleration operation behavior mode comprises:
controlling, based on the contour edge position of the target object and in combination with the obstacle avoidance distance, the self-moving device to execute the work task in the surrounding area whose distance from the target object is greater than the obstacle avoidance distance; and
when the self-moving device climbs onto the target object to execute the work task, controlling, based on the height information of the outer contour edge among the appearance parameters, the self-moving device to drive the side brush to execute a cleaning task on top of the target object at the first side brush rotating speed.
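The deceleration mode of claims 10-11 bundles an indication parameter with execution parameters (clearance distance and a reduced brush speed). The following sketch shows one way such a mode could be represented and applied; the concrete values and the `robot` methods are illustrative assumptions.

```python
# Sketch of the deceleration mode in claims 10-11 for winding-prone objects
# (wires, clothes, carpet edges). Values and robot methods are assumptions.

DECELERATION_MODE = {
    "action": "decelerate",                 # second indication parameter
    "obstacle_avoidance_distance_m": 0.10,  # second execution parameters
    "side_brush_rpm": 40,                   # below the rotating-speed threshold
}

def clean_near_winding_object(robot, contour_edge, mode=DECELERATION_MODE):
    robot.keep_clearance(contour_edge, mode["obstacle_avoidance_distance_m"])
    robot.set_side_brush_rpm(mode["side_brush_rpm"])
```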
12. The method according to any one of claims 1-5, wherein, in the case that the target object category is soil-prone, selecting a target machine behavior mode adapted to the target object category comprises: selecting an enhanced operation behavior mode as the target machine behavior mode;
correspondingly, controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode comprises:
controlling, based on the contour edge position among the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the enhanced operation behavior mode.
13. The method of claim 12, wherein the enhanced operation behavior mode comprises: a third indication parameter indicating an enhanced operation and third execution parameters required by the enhanced operation, the third execution parameters comprising at least a number of operation passes and a second side brush rotating speed greater than the rotating speed threshold;
correspondingly, controlling, based on the contour edge position among the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the enhanced operation behavior mode comprises:
controlling, based on the contour edge position among the appearance parameters of the target object, the self-moving device to execute the work task multiple times in the area surrounding the target object according to the number of operation passes; and
during each execution of the work task, controlling the self-moving device to drive the side brush to execute a cleaning task in the area surrounding the target object at the second side brush rotating speed.
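The enhanced mode of claims 12-13 is the mirror image of the deceleration mode: several passes with the side brush above the speed threshold. A minimal sketch follows; the pass count, speed, and `robot` methods are illustrative assumptions.

```python
# Sketch of the enhanced (reinforced) cleaning mode in claims 12-13
# for soil-prone objects. Values and robot methods are assumptions.

ENHANCED_MODE = {"action": "reinforce", "passes": 3, "side_brush_rpm": 140}

def clean_around_soil_prone(robot, contour_edge, mode=ENHANCED_MODE):
    robot.set_side_brush_rpm(mode["side_brush_rpm"])  # above the speed threshold
    for _ in range(mode["passes"]):                   # repeat the perimeter pass
        robot.sweep_perimeter(contour_edge)
```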
14. The method according to any one of claims 1-5, wherein, in the case that the target object category is movable, selecting a target machine behavior mode adapted to the target object category comprises: selecting a voice prompt behavior mode as the target machine behavior mode;
correspondingly, controlling, according to the position information and/or the appearance parameters of the target object, the self-moving device to execute the work task on the target object according to the target machine behavior mode comprises:
controlling, based on the position information of the target object, the self-moving device to issue voice prompt information for the target object according to the voice prompt behavior mode, so as to prompt the target object to change its state; and
identifying the latest state of the target object in combination with the structured light data collected for the target object, and continuing to control the self-moving device to execute the work task on the target object in the case that the latest state meets the voice prompt requirement.
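Claim 14 couples a spoken prompt with a re-check of the target's state from freshly collected structured light data. The sketch below illustrates that prompt-then-verify loop; the message text, the retry counts, and the `robot` methods are hypothetical and not taken from the patent.

```python
# Sketch of the voice-prompt mode in claim 14 for movable targets
# (e.g. people or pets). The robot/speaker interface is an assumption.
import time

def prompt_and_wait(robot, target_position, checks=3, interval_s=2.0):
    robot.say("Please make way so cleaning can continue.")  # illustrative prompt
    for _ in range(checks):
        time.sleep(interval_s)
        state = robot.scan_target(target_position)  # re-acquire structured light data
        if state.has_moved:
            return True   # prompt requirement satisfied; resume the work task
    return False          # fall back to another behavior mode (e.g. avoidance)
```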
15. The method according to any one of claims 1-5, wherein selecting a target machine behavior mode adapted to the target object category comprises:
querying, according to the target object category, known object categories and the machine behavior modes corresponding to the known object categories, so as to obtain the machine behavior mode corresponding to the target object category as the target machine behavior mode.
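The selection step in claim 15 is a lookup from known object categories to their machine behavior modes. The table below is an assumed illustration of such a mapping; the category and mode names mirror the earlier sketches and are not the patent's actual data.

```python
# Sketch of the category-to-mode lookup in claim 15. Table contents are
# illustrative assumptions, not the patent's actual mapping.

BEHAVIOR_MODES = {
    "easily_stranded_not_passable": "obstacle_avoidance",
    "easily_stranded_passable": "accelerated_passage",
    "winding_prone": "deceleration_operation",
    "soil_prone": "enhanced_operation",
    "movable": "voice_prompt",
}

def select_behavior_mode(target_category: str, default: str = "normal") -> str:
    return BEHAVIOR_MODES.get(target_category, default)


print(select_behavior_mode("winding_prone"))  # deceleration_operation
```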
16. The method of claim 15, further comprising:
displaying a known object category, and obtaining a modified object category in response to a first modification operation initiated for the known object category, wherein the known object category is set by a user on an environment map and/or identified by the self-moving device based on historical image data;
wherein the first modification operation comprises at least one of: modifying the name of the object category, adjusting the object corresponding to the object category, and deleting the known object category.
17. The method of claim 15, wherein the machine behavior mode comprises at least the behavior parameters and behavior actions required for the self-moving device to execute the work task, and the method further comprises:
displaying the machine behavior mode corresponding to the known object category, and obtaining a modified machine behavior mode in response to a second modification operation initiated for the machine behavior mode;
wherein the second modification operation comprises at least one of: modifying an existing behavior parameter, adding a new behavior parameter, deleting an existing behavior parameter, modifying an existing machine behavior action, adding a new machine behavior action, and deleting an existing machine behavior action.
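Claims 16-17 let a user rename known categories and edit the parameters and actions of the attached behavior modes. The snippet below sketches one possible data model for those edits; the dictionary layout, field names, and values are assumptions made purely for illustration.

```python
# Sketch of the user-editing flow in claims 16-17 over an assumed data model.

known_objects = {"wire": {"mode": "deceleration_operation",
                          "params": {"obstacle_avoidance_distance_m": 0.10}}}

# First modification operation: rename a known object category.
known_objects["power_cable"] = known_objects.pop("wire")

# Second modification operation: adjust an existing behavior parameter
# and add a new one.
known_objects["power_cable"]["params"]["obstacle_avoidance_distance_m"] = 0.15
known_objects["power_cable"]["params"]["side_brush_rpm"] = 0
```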
18. A self-moving device, comprising: a device body, a first optical module, and a second optical module, wherein the device body is provided with one or more memories, one or more processors, and the structured light module; the structured light module comprises a structured light assembly and a vision sensor;
the one or more memories are configured to store a computer program; the one or more processors are configured to execute the computer program to:
acquire structured light data and image data of a front working area by using, respectively, the structured light assembly and the vision sensor in the structured light module;
identify, based on the image data, a target object category present in the front working area, and select a target machine behavior mode adapted to the target object category;
and, with the aid of the structured light data, control the self-moving device to execute a work task on a target object present in the front working area according to the target machine behavior mode.
19. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1-17.
20. A method of operation for a self-moving device having a structured light module, the method comprising:
acquiring structured light data and image data of a front working area by using, respectively, a structured light assembly and a vision sensor in the structured light module, wherein the vision sensor comprises at least one of a monocular RGB camera and a binocular RGB camera, and the structured light assembly comprises a laser camera and line laser emitters distributed on both sides of the laser camera;
identifying, based on the image data, a target object category present in the front working area, and selecting a target machine behavior mode adapted to the target object category;
with the aid of the structured light data, controlling the self-moving device to execute a work task on a target object present in the front working area according to the target machine behavior mode, wherein the rotating speed of a side brush of the self-moving device is controlled when the work task is executed in the area surrounding the target object;
wherein the target object comprises at least one of: a garbage can, a charging dock, shoes, a bowl, a U-shaped chair, a bar chair, a sliding door rail, clothes, a carpet edge, an electric wire, a person, and an animal.
21. A method of operation for a self-moving device having a structured light module, the method comprising:
acquiring structured light data and image data of a front working area by using, respectively, a structured light assembly and a vision sensor in the structured light module;
identifying, based on the image data, a target object category present in the front working area, and selecting a target machine behavior mode adapted to the target object category;
with the aid of the structured light data, controlling the self-moving device to execute a work task on a target object present in the front working area according to the target machine behavior mode;
wherein the self-moving device executes the work task multiple times in the surrounding area whose distance from the target object is greater than an obstacle avoidance distance, and the rotating speed of a side brush is controlled during execution of the work task;
when the target object is a sliding door rail, accelerating through the sliding door rail when the distance between the self-moving device and the sliding door rail reaches the obstacle avoidance distance, and increasing the rotating speed of the side brush while accelerating through the sliding door rail; and, after passing through and leaving the sliding door rail, restoring the rotating speed of the side brush to a normal rotating speed;
when the target object is a U-shaped chair or a bar chair, accelerating through the U-shaped chair or the bar chair when the distance between the self-moving device and the U-shaped chair or the bar chair reaches the obstacle avoidance distance, and increasing the rotating speed of the side brush while accelerating through the U-shaped chair or the bar chair; and, after passing through and leaving the U-shaped chair or the bar chair, restoring the rotating speed of the side brush to the normal rotating speed;
when the target object is clothes or an electric wire, slowing the rotating speed of the side brush in the surrounding area whose distance from the target object is greater than the obstacle avoidance distance;
and when the target object is a carpet, slowing the rotating speed of the side brush in the surrounding area whose distance from the target object is greater than the obstacle avoidance distance, and slowing the rotating speed of the side brush when the self-moving device climbs onto the target object to execute the work task.
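Claim 21 adds the detail that the faster brush speed is only held while passing the rail or chair and is restored afterwards, and that the slower speed also applies on top of a carpet. The dispatch below is a minimal sketch of that per-object handling; all speeds, factors, and `robot` methods are illustrative assumptions.

```python
# Sketch of the per-object side-brush handling in claim 21.
# Speeds, factors, and the robot interface are assumptions.

def handle_target(robot, category, distance_to_target, avoid_dist, normal_rpm=90):
    if category in {"sliding_door_rail", "u_shaped_chair", "bar_chair"}:
        if distance_to_target <= avoid_dist:
            robot.set_side_brush_rpm(int(normal_rpm * 1.5))  # faster while passing
            robot.accelerate_through(category)
            robot.set_side_brush_rpm(normal_rpm)             # restore once clear
    elif category in {"clothes", "wire"}:
        robot.set_side_brush_rpm(int(normal_rpm * 0.5))      # slower near the object
    elif category == "carpet":
        robot.set_side_brush_rpm(int(normal_rpm * 0.5))      # slower nearby and on top
```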
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310706115.1A CN116711996A (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310706115.1A CN116711996A (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
CN202110944997.6A CN113786125B (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110944997.6A Division CN113786125B (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116711996A (en) | 2023-09-08
Family
ID=78876138
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110944997.6A Active CN113786125B (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
CN202310706115.1A Pending CN116711996A (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110944997.6A Active CN113786125B (en) | 2021-08-17 | 2021-08-17 | Operation method, self-mobile device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113786125B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4385384A1 (en) * | 2021-08-17 | 2024-06-19 | Ecovacs Robotics Co., Ltd. | Structured light module and self-moving device |
CN114287832A (en) * | 2021-12-30 | 2022-04-08 | 安徽协创物联网技术有限公司 | Floor sweeping robot system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106412394B * | 2016-09-26 | 2019-04-30 | 上海应用技术大学 | A safe travel device for road users who walk with their heads bowed |
CN107092252A * | 2017-04-11 | 2017-08-25 | 杭州光珀智能科技有限公司 | A robot automatic obstacle avoidance method and device based on machine vision |
US20210138640A1 (en) * | 2018-04-09 | 2021-05-13 | Lg Electronics Inc. | Robot cleaner |
CN110179404A (en) * | 2019-05-22 | 2019-08-30 | 深圳飞科机器人有限公司 | Clean robot, clean method and computer readable storage medium |
CN110622085A (en) * | 2019-08-14 | 2019-12-27 | 珊口(深圳)智能科技有限公司 | Mobile robot and control method and control system thereof |
CN112155487A (en) * | 2019-08-21 | 2021-01-01 | 追创科技(苏州)有限公司 | Sweeping robot, control method of sweeping robot and storage medium |
CN110974088B (en) * | 2019-11-29 | 2021-09-24 | 深圳市杉川机器人有限公司 | Sweeping robot control method, sweeping robot and storage medium |
CN111421553A (en) * | 2020-05-14 | 2020-07-17 | 小狗电器互联网科技(北京)股份有限公司 | Robot control method based on contour recognition, storage medium and mobile robot |
CN112363513B (en) * | 2020-11-25 | 2024-06-18 | 珠海一微半导体股份有限公司 | Obstacle classification obstacle avoidance control method based on depth information |
Also Published As
Publication number | Publication date |
---|---|
CN113786125A (en) | 2021-12-14 |
CN113786125B (en) | 2023-07-28 |
Similar Documents
Publication | Title |
---|---|
CN112415998B (en) | Obstacle classification obstacle avoidance control system based on TOF camera | |
CN113786125B (en) | Operation method, self-mobile device, and storage medium | |
CN111142526B (en) | Obstacle crossing and operation method, equipment and storage medium | |
CN107224249B (en) | Cleaning operation execution method and device of cleaning equipment and readable storage medium | |
AU2014278987B2 (en) | Cleaning robot and method for controlling the same | |
CN109514582B (en) | Pet teasing control device for robot and mobile robot | |
CN104825101A (en) | Robot cleaner and controlling method thereof | |
CN110946508B (en) | Control method and device of sweeping robot using laser radar and camera | |
CN112890680B (en) | Follow-up cleaning operation method, control device, robot and storage medium | |
CN109479746B (en) | Robot pet teasing control method and chip | |
US20210187732A1 (en) | Artificial intelligence cleaning robot and method thereof | |
CN114527736B (en) | Dilemma avoidance method, autonomous mobile device, and storage medium | |
US11397437B2 (en) | System with a first floor processing device and a second floor processing device as well as a method for operating such a system | |
CN111714028A (en) | Method, device and equipment for escaping from restricted zone of cleaning equipment and readable storage medium | |
CN110967703A (en) | Indoor navigation method and indoor navigation device using laser radar and camera | |
EP4388961A1 (en) | Cleaning robot, control method and apparatus therefor, and electronic device and storage medium | |
CN112754363A (en) | Cleaning control method, cleaning control device, cleaning apparatus, and storage medium | |
CN110881909A (en) | Control method and device of sweeper | |
CN116416518A (en) | Intelligent obstacle avoidance method and device | |
CN116211168A (en) | Operation control method and device of cleaning equipment, storage medium and electronic device | |
CN110946512A (en) | Sweeping robot control method and device based on laser radar and camera | |
CN113741441A (en) | Operation method and self-moving equipment | |
CN111657791A (en) | Remote control cleaning method and device | |
CN115568785A (en) | Method for controlling operation of sweeping robot, related device and storage medium | |
CN109557908B (en) | Robot navigation method and system based on projection virtual wall |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |