CN115469648A - Operation method, self-moving device and storage medium - Google Patents


Info

Publication number
CN115469648A
Authority
CN
China
Prior art keywords: area, dirty, target, stain, job
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110648198.4A
Other languages
Chinese (zh)
Inventor
邹雨程
田美芹
鲍亮
吴牟雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN202110648198.4A priority Critical patent/CN115469648A/en
Publication of CN115469648A publication Critical patent/CN115469648A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 — Control of position or course in two dimensions
    • G05D 1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means


Abstract

The embodiment of the application provides an operation method, a self-moving device, and a storage medium. In the embodiment of the application, the self-moving device supports multiple operation modes; different operation modes correspond to different area attributes, and different area attributes correspond to different areas. This means the self-moving device can execute operation tasks for different areas according to different operation modes, and these rich and diverse operation modes allow it to execute operation tasks flexibly and with precision, meeting users' operation requirements. Furthermore, when operation tasks are executed for different areas, operation parameters adapted to each area can be adopted, which ensures the execution effect, improves overall operation efficiency, and saves the resources required to execute the operation tasks.

Description

Operation method, self-moving device and storage medium
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to an operation method, a self-moving device, and a storage medium.
Background
With the rapid development of artificial intelligence, self-moving robots have gradually entered people's daily lives. Intelligent robots such as floor-sweeping robots, window-cleaning robots, and cleaning machines relieve people of a great deal of work and bring great convenience to daily life.
Existing self-moving robots usually execute a task over the entire operation area in a fixed traversal pattern, such as a zigzag ('弓'-shaped, boustrophedon) path. This operation mode lacks flexibility and specificity and cannot meet users' operation requirements.
Disclosure of Invention
Aspects of the present application provide an operation method, a self-moving device, and a storage medium that offer richer and more diverse operation modes, so that operation tasks can be executed more flexibly and specifically and users' operation requirements can be met.
The embodiment of the application provides an operation method applicable to a self-moving device, the method comprising: receiving an operation instruction; determining a target operation mode according to the operation instruction, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute; identifying, according to the target operation mode, at least a partial area having the target area attribute in a first area as a second area in which an operation task is to be executed; and executing the operation task in the second area using operation parameters adapted to the second area.
The embodiment of the present application further provides an operation method applicable to a self-moving device, the method comprising: identifying a dirty area in a first area when an area operation instruction is received; identifying, according to an environment image of the dirty area, at least two kinds of information among stain attributes, floor attributes, and scene information of the dirty area; determining operation parameters matching the dirty area according to the at least two kinds of information; and executing the operation task in the dirty area using the operation parameters.
The embodiment of the present application further provides a self-moving device, comprising a device body, a control unit, and a display unit, the device body being provided with a processor and a memory for storing a computer program; the processor is configured to execute the computer program to: receive an operation instruction; determine a target operation mode according to the operation instruction, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute; identify, according to the target operation mode, at least a partial area having the target area attribute in a first area as a second area in which an operation task is to be executed; and execute the operation task in the second area using operation parameters adapted to the second area.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of any of the method embodiments of the present application.
In this embodiment, the self-moving device supports multiple operation modes; different operation modes correspond to different area attributes, and different area attributes correspond to different areas. This means the self-moving device can execute operation tasks for different areas according to different operation modes, and these rich and diverse operation modes help the self-moving device execute operation tasks more flexibly and with precision, meeting users' operation requirements. Furthermore, when operation tasks are executed for different areas, operation parameters adapted to each area can be adopted, which ensures the execution effect, improves overall operation efficiency, and saves the resources required to execute the operation tasks.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic flowchart of an operation method according to an exemplary embodiment of the present application;
FIG. 1b is a schematic flowchart of another operation method according to an exemplary embodiment of the present application;
FIG. 1c is a schematic flowchart of another operation method according to an exemplary embodiment of the present application;
FIG. 1d is a schematic flowchart of another operation method according to an exemplary embodiment of the present application;
FIG. 2a is a schematic flowchart of another operation method according to an exemplary embodiment of the present application;
FIG. 2b is a schematic flowchart of another operation method according to an exemplary embodiment of the present application;
FIG. 3a is a schematic structural diagram of a self-moving device according to an exemplary embodiment of the present application;
FIG. 3b is a schematic diagram of a mounting manner of a camera on a self-moving device according to an exemplary embodiment of the present application;
FIG. 3c is a schematic diagram of another mounting manner of a camera on a self-moving device according to an exemplary embodiment of the present application;
FIG. 3d is a schematic diagram of another mounting manner of a camera on a self-moving device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Self-moving devices are widely used in people's daily lives and bring great convenience to people's life and work. In the embodiment of the present application, a self-moving device may be any mechanical device that can move autonomously in its environment and perform certain tasks, for example a robot, a cleaner, an unmanned vehicle, and the like. The robot may include a sweeping robot, a mopping robot, an integrated sweeping and mopping robot, a glass-cleaning robot, a family companion robot, a guest-greeting robot, an autonomous service robot, and so on, which are not limited herein. These self-moving devices can move autonomously using power supplied by rechargeable batteries, collect environmental information about the working environment, and construct or update an environment map of the working environment according to that information.
In the embodiment of the present application, the self-moving device may support a plurality of different operation modes, and the different operation modes correspond to different area attributes, that is, in the different operation modes, the self-moving device may execute the operation task for areas with different area attributes. Wherein the different region attributes correspond to different environmental regions. According to different application scenarios, the multiple operation modes supported by the mobile device may be different, and accordingly, the area attributes corresponding to different operation modes may also be different. The following examples illustrate:
In an alternative embodiment, the multiple operation modes supported by the self-moving device may include a whole-area operation mode and an area operation mode. The whole-area operation mode is an operation mode in which, for a first area, the self-moving device needs to execute the operation task throughout the entire first area; correspondingly, the area operation mode is an operation mode in which, within the first area, the self-moving device needs to execute the operation task in a local area having a certain attribute. The area attribute corresponding to the whole-area operation mode is attribute information that delimits the operation area as the entire range (i.e., the entire first area), for example a whole-area attribute; the area attribute corresponding to the area operation mode is attribute information that delimits the operation area within a certain local range, for example a stain attribute, a living-room attribute, a bedroom attribute, a tea-table attribute, a dining-table attribute, or another local attribute. The stain attribute delimits a stained area in the first area, the living-room attribute delimits a living-room area in the first area, the bedroom attribute delimits a bedroom area in the first area, the tea-table attribute delimits the area under a tea table in the first area, and the dining-table attribute delimits the area under a dining table in the first area. It should be noted that the expressions of area attributes listed above are only examples; any expression of an attribute that can delimit a certain local area is applicable to the embodiments of the present application.
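The correspondence described above (operation mode → area attribute → delimited region) can be sketched as a simple lookup table. All identifiers below (`full_area_mode`, `stain_attribute`, etc.) are illustrative assumptions, not names taken from the patent:

```python
# Illustrative sketch of the mode -> area-attribute correspondence described
# above. Every name here is a hypothetical placeholder for illustration.
OPERATION_MODES = {
    "full_area_mode": "full_area_attribute",        # task runs over the entire first area
    "area_mode_stain": "stain_attribute",           # task limited to stained areas
    "area_mode_living_room": "living_room_attribute",
    "area_mode_tea_table": "tea_table_attribute",   # area under the tea table
}

def area_attribute_for(mode: str) -> str:
    """Return the area attribute that a given operation mode targets."""
    try:
        return OPERATION_MODES[mode]
    except KeyError:
        raise ValueError(f"unsupported operation mode: {mode}")

print(area_attribute_for("area_mode_stain"))  # stain_attribute
```

In practice such a table could be built in at factory setup or configured by the user, as the following paragraphs describe.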
In the embodiment of the application, the first area represents the entire operation area corresponding to the operation requirement; its specific implementation and coverage can be set flexibly according to the application scenario and the operation requirement, which are not limited herein. Taking a home scenario as an example, the first area may be all areas in the home environment, or a local part of it, such as one or more of the bedroom, living-room, kitchen, balcony, study, and bathroom areas. Taking the first area as all areas in the home environment: in the whole-area operation mode, the self-moving device determines, according to the area attribute corresponding to that mode, that the operation task is to be executed for all areas in the home environment, and then executes it there; in the area operation mode, the self-moving device determines, according to the area attribute corresponding to that mode, the local area in which the operation task is to be executed, for example the kitchen area or the living-room area, and then executes the operation task for that local area.
In the embodiment of the application, the multiple operation modes supported by the self-moving device, together with the area attribute corresponding to each mode, may be built into the device at factory setup. Alternatively, the user may set the supported operation modes and configure the area attribute of each mode according to application requirements. For example, the user may install the APP corresponding to the self-moving device on a terminal device and use the APP to set the supported operation modes and configure the area attribute of each mode; or, where the self-moving device is provided with a touch screen, the user may do the same through the touch screen.
In one usage mode, the multiple operation modes supported by the self-moving device can all be enabled at the same time. The self-moving device selects a target operation mode from them according to an operation instruction issued by the user and then executes the operation task for the area having the target area attribute, i.e., the area attribute corresponding to the target operation mode. The user can flexibly adjust which operation modes the self-moving device actually enables, placing it in another usage mode. In this other usage mode, the user sets the self-moving device to enable only a specific operation mode; the self-moving device then determines, according to the user's operation instruction, the area in the first area that has the area attribute corresponding to that specific mode, and executes the operation task for it. For example, where the self-moving device supports both a whole-area operation mode and an area operation mode, it may enable both simultaneously and select between them according to the user's operation instruction: when the whole-area operation mode is selected, the operation task is executed for the entire first area; when the area operation mode is selected, the task is executed for the area in the first area that has the attribute corresponding to the area operation mode. Alternatively, the self-moving device may enable only the whole-area operation mode or only the area operation mode: with only the whole-area mode enabled, it executes the operation task for the entire first area according to the user's instruction; with only the area mode enabled, it executes the task for a partial area of the first area having a certain attribute.
According to users' needs, the self-moving device supports switching between these two usage modes. The process by which the self-moving device executes operation tasks in different modes is described in detail below through specific embodiments.
Fig. 1a is a schematic flowchart of an operation method according to an embodiment of the present application, and as shown in fig. 1a, the method includes:
S1a: receiving an operation instruction;
S2a: determining a target operation mode according to the operation instruction, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute;
S3a: identifying at least a partial area having the target area attribute in a first area as a second area in which an operation task is to be executed;
S4a: executing the operation task in the second area using operation parameters adapted to the second area.
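Steps S1a–S4a can be sketched as a minimal control pipeline. The helper functions and all names below are hypothetical placeholders for the device's own perception and actuation logic, not an implementation from the patent:

```python
# Minimal sketch of steps S1a-S4a. All helpers and names are assumptions.
def run_job(instruction: dict, first_area: list) -> list:
    # S2a: map the instruction to a target mode and its area attribute
    mode, target_attribute = determine_target_mode(instruction)
    # S3a: second area = subregions of the first area with that attribute
    second_area = identify_second_area(first_area, target_attribute)
    # S4a: execute with operation parameters adapted to the second area
    params = parameters_for(second_area)
    return execute_task(second_area, params)

def determine_target_mode(instruction):
    mode = instruction.get("mode", "full_area_mode")
    attr = "full_area_attribute" if mode == "full_area_mode" else "stain_attribute"
    return mode, attr

def identify_second_area(first_area, attribute):
    if attribute == "full_area_attribute":
        return first_area                      # second area == entire first area
    return [r for r in first_area if attribute in r["attributes"]]

def parameters_for(area):
    # e.g. extra cleaning passes when stained regions are present
    stained = any("stain_attribute" in r["attributes"] for r in area)
    return {"passes": 2 if stained else 1}

def execute_task(area, params):
    return [(r["name"], params["passes"]) for r in area]

rooms = [{"name": "kitchen", "attributes": {"stain_attribute"}},
         {"name": "bedroom", "attributes": set()}]
print(run_job({"mode": "area_mode"}, rooms))   # [('kitchen', 2)]
```

In the area mode only the stained kitchen region is worked on, with heavier parameters; omitting the mode key falls back to the whole-area mode over both rooms.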
The execution subject of this embodiment may be a self-moving device, which receives the operation instruction and executes the operation task according to it. The operation instruction of this embodiment is used to instruct the self-moving device to execute an operation task. The specific form in which the self-moving device receives the operation instruction is not limited; examples follow. Optionally, the self-moving device is provided with a display screen supporting human-computer interaction, through which the user can issue an operation instruction. Alternatively, the self-moving device may be bound to a terminal device used by the user, on which an application program (e.g., an APP) corresponding to the self-moving device is installed, and the user issues the operation instruction through the APP. Alternatively, the self-moving device may receive an operation instruction from a server; the instruction may be forwarded to the self-moving device by another device via the server, or sent by the server itself. Alternatively, the user may set a timed task in the self-moving device, the timed task comprising a scheduled time and a corresponding operation instruction; when the scheduled time arrives, the self-moving device executes the operation task according to that instruction.
In this embodiment, the self-moving device supports multiple different operation modes at the same time, all of which are enabled. The operation instruction may also directly or indirectly indicate the operation mode the self-moving device should use to execute the task; that is, the self-moving device can determine a target operation mode from the multiple modes according to the operation instruction. The implementation of this determination is not limited here; related implementations are described in the embodiments shown in FIG. 1b to FIG. 1d below and are not repeated. Different operation modes correspond to different area attributes, and different area attributes correspond to different areas. For convenience of description and distinction, the operation mode determined for use is called the target operation mode, and the area attribute corresponding to it is called the target area attribute.
For example, the target operation mode may be the whole-area operation mode or the area operation mode; correspondingly, the area attribute of the whole-area mode may be a whole-area attribute, and that of the area mode may be a local attribute such as a stain attribute, a tea-table attribute, or a dining-table attribute. For the definitions of the whole-area operation mode and its whole-area attribute, and of the area operation mode and its local attribute, see above; these are not limited herein. For example, in a shopping-mall environment, the whole-area attribute of the whole-area mode may represent all floors, including supermarket, jewelry, clothing, shoes-and-hats, beauty, catering, and cinema floors; the local attribute of the area mode may be a stain attribute indicating stained areas on one or more such floors. One or more floors is then a specific implementation of the first area in the embodiments of the present application. For another example, in a home environment, the whole-area attribute may represent the entire area, including partitions such as the bedroom, living room, kitchen, bathroom, and balcony; the local attribute may be a stain attribute representing stained areas in one or more of those partitions. One or more partitions is then another specific implementation of the first area.
Further, having determined the target operation mode and its target area attribute, the self-moving device may identify at least a partial area of the first area having the target area attribute as the second area in which the operation task is to be executed. The first area is the entire operation area, predetermined according to the application scenario, in which operation tasks may be executed; the second area is the area in which the task actually needs to be executed, and may be part or all of the first area. In this embodiment, if the target operation mode is the whole-area operation mode, the target area attribute is the whole-area attribute, meaning the task is to be executed over the entire first area, so the first area itself serves as the second area; in this case the second area is the first area. If the target operation mode is the area operation mode, the target area attribute is a local attribute, which may further be a stain attribute, meaning the task is to be executed in stained areas within the first area; the stained area in the first area is then identified as the second area, which in this case is a partial area of the first area.
After the second area is identified, the self-moving device can move to it and execute the operation task there using operation parameters adapted to it. The implementation of obtaining those parameters is not limited in this embodiment and may differ with the second area; detailed implementations are given in the following embodiments and are not repeated here.
In this embodiment, the self-moving device supports multiple operation modes; different operation modes correspond to different area attributes, and different area attributes correspond to different areas. This means the self-moving device can execute operation tasks for different areas according to different operation modes, and these rich and diverse operation modes help the self-moving device execute operation tasks more flexibly and with precision, meeting users' operation requirements. Furthermore, when operation tasks are executed for different areas, operation parameters adapted to each area can be adopted, which ensures the execution effect, improves overall operation efficiency, and saves the resources required to execute the operation tasks.
Fig. 1b is a schematic flowchart of another operation method provided in an exemplary embodiment of the present application, and as shown in fig. 1b, the method includes:
S1b: receiving an operation instruction, the operation instruction comprising a mode identifier indicating the operation mode to be used for executing an operation task;
S2b: taking, according to the mode identifier contained in the operation instruction, the operation mode corresponding to that identifier as a target operation mode, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute;
S3b: identifying, according to the target operation mode, at least a partial area having the target area attribute in a first area as a second area in which the operation task is to be executed;
S4b: executing the operation task in the second area using operation parameters adapted to the second area.
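Steps S1b–S2b amount to resolving the mode identifier carried in the instruction against the registered modes, i.e., a table lookup with rejection of unknown identifiers. The identifier values and names below are illustrative assumptions:

```python
# Sketch of S1b-S2b: resolve the mode identifier carried in the operation
# instruction against the registered operation modes. The numeric identifiers
# and mode names are hypothetical.
REGISTERED_MODES = {
    0: "full_area_mode",
    1: "area_mode",
}

def target_mode_from_instruction(instruction: dict) -> str:
    mode_id = instruction["mode_id"]           # parsed out of the instruction
    if mode_id not in REGISTERED_MODES:
        raise ValueError(f"unknown mode identifier: {mode_id}")
    return REGISTERED_MODES[mode_id]

print(target_mode_from_instruction({"mode_id": 1}))  # area_mode
```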
In this embodiment, a mode identifier may be configured for each operation mode; different operation modes have different mode identifiers, and the mode identifiers are known to the user. On this basis, a mode identifier may be carried in the operation instruction to indicate the operation mode the self-moving device should use to execute the task. After receiving the operation instruction, the self-moving device parses the mode identifier from it and matches that identifier against the correspondence between mode identifiers and operation modes to obtain the corresponding operation mode, which is the target operation mode.
In an alternative embodiment, where the self-moving device supports both the whole-area operation mode and the area operation mode, the mode identifier contained in the operation instruction may be that of either mode. If it is the identifier of the whole-area operation mode, the self-moving device determines that the whole-area operation mode is the target operation mode; if it is the identifier of the area operation mode, the self-moving device determines that the area operation mode is the target operation mode.
For detailed descriptions of other steps in this embodiment, refer to other embodiments, which are not described herein.
Fig. 1c is a schematic flow chart of another operation method provided in an exemplary embodiment of the present application, and as shown in fig. 1c, the method includes:
S1c: receiving an operation instruction;
S2c: acquiring, according to the operation instruction, a partial environment image of a first area, the partial environment image reflecting the overall contamination degree of the first area;
S3c: selecting an operation mode adapted to the overall contamination degree as a target operation mode, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute;
S4c: identifying, according to the target operation mode, at least a partial area having the target area attribute in the first area as a second area in which an operation task is to be executed;
S5c: executing the operation task in the second area using operation parameters adapted to the second area.
In this embodiment, the self-moving device supports multiple operation modes, and mode selection is tied to the overall contamination degree of the first area and performed automatically by the self-moving device. Specifically, the operation instruction need not contain a mode identifier; after receiving it, the self-moving device collects a partial environment image of the first area, which reflects the area's overall contamination degree, and then selects from the multiple operation modes the one adapted to that degree as the target operation mode.
In an alternative embodiment, the overall contamination degree of the first area may be compared with a set contamination threshold. When the overall contamination degree is greater than or equal to the threshold, the first area is judged relatively heavily contaminated and the operation task needs to be executed over the whole of it, so the whole-area operation mode is determined as the target operation mode. When the overall contamination degree is below the threshold, the first area is judged relatively clean overall and the task need not be executed over the entire first area, so the area operation mode may be determined as the target operation mode.
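The threshold comparison can be sketched in a few lines. The numeric scale of the contamination score and the default threshold value are assumptions for illustration:

```python
def select_mode(overall_contamination: float, threshold: float = 0.5) -> str:
    """Pick the whole-area mode when contamination is at or above the set
    threshold, otherwise fall back to the (local) area operation mode.
    The [0, 1] score scale and the 0.5 default are illustrative assumptions."""
    return "full_area_mode" if overall_contamination >= threshold else "area_mode"

print(select_mode(0.7))  # full_area_mode
print(select_mode(0.2))  # area_mode
```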
In an alternative embodiment, the overall contamination degree of the first area is obtained from the partial environment image by comparing it with a reference environment image, stored in advance, that corresponds to the contamination threshold. For example, the contamination degree of the partial environment image may be determined by comparing its pixel values with those of the reference environment image, the pixel values representing colors. For another example, the reference environment image may serve as a reference within an image recognition model; the partial environment image is then recognized by the model to identify the stain type, stain profile, area size, and so on, thereby determining its contamination degree.
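One way to realise the pixel-value comparison described above is to score the mean absolute difference between the captured image and a stored clean-floor reference. Modelling images as flat lists of 8-bit grayscale pixels, and the normalisation to a [0, 1] score, are assumptions for this sketch; real code would operate on camera frames:

```python
# Sketch of contamination scoring by comparing a captured environment image
# against a stored clean-floor reference image (both hypothetical grayscale
# pixel lists).
def contamination_score(image: list, reference: list) -> float:
    if len(image) != len(reference):
        raise ValueError("image and reference must have the same size")
    diff = sum(abs(p - r) for p, r in zip(image, reference))
    return diff / (255 * len(image))           # normalised to [0, 1]

clean = [200, 200, 200, 200]
captured = [200, 80, 90, 200]                  # two darker (stained) pixels
print(round(contamination_score(captured, clean), 3))  # 0.225
```

The resulting score can then be fed to the threshold comparison described in the previous paragraph.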
In the above or following embodiments of the present application, the manner in which the self-moving device acquires the partial environment image of the first area is not limited. In an optional embodiment, the self-moving device may, according to the job instruction, collect an environment image of the surrounding environment at its current location as the partial environment image of the first area, where the current location belongs to the first area. For example, if the self-moving device is located in the first area and its current location is open, or the environment around the current location is reasonably representative, the self-moving device may collect the environment image within its field of view at the current location as the partial environment image of the first area.
In another alternative embodiment, the self-moving device may move from its current location to a first location according to the job instruction, and capture an environment image of the surrounding environment at the first location as the partial environment image of the first area. For example, after finishing its previous job task, the self-moving device usually returns to a charging dock or base station to charge, and the dock or base station is usually placed against a wall or in a corner. An environment image collected directly at that position may not reflect the overall contamination degree of the first area well. To collect a more representative image, the self-moving device may move from the dock or base station position (i.e., the current location) to the first location and collect the environment image there. In this embodiment, because the dock or base station position (the initial position of the self-moving device) is not representative, it cannot properly reflect the overall contamination degree of the first area; after the device moves to the more representative first location, the acquired environment image reflects the overall contamination degree of the first area more accurately, which helps determine the target operation mode and improves the job effect.
In this embodiment, the first position may be any position in the first area, or may be a designated position in the first area, which is not limited herein. For example, in the case where the first location is any location in the first area, the self-moving device may randomly rotate by a certain angle, and randomly move by a certain distance from the current location along the direction of rotation to reach the first location. In the case where the first position is a specified position in the first area, the self-moving apparatus may rotate by a certain angle toward a specified direction, move from a current position by a specified distance along the specified direction, and reach the first position. Optionally, in order to make the acquired partial environment image better reflect the overall contamination level of the first region, the first position may be set to be a more representative position in the first region. Taking a home environment as an example, the first position may be set to a certain position in a living room, or a certain position in a bedroom, or a certain position in a kitchen, etc.
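The rotate-then-move procedure for reaching the first position can be sketched as simple planar geometry. This is a hypothetical illustration: the angle and distance ranges, and the 2-D pose representation, are assumptions rather than values from the patent.

```python
import math
import random

# Minimal sketch (hypothetical): compute the first position by rotating from
# the current heading and advancing a distance along the new direction.
# When angle/distance are omitted, random values are drawn, matching the
# "randomly rotate, randomly move" variant; the ranges are illustrative.

def first_position(x, y, heading_deg, angle_deg=None, distance=None):
    """Rotate by angle_deg and advance `distance` metres from (x, y)."""
    if angle_deg is None:
        angle_deg = random.uniform(-180.0, 180.0)  # random rotation
    if distance is None:
        distance = random.uniform(0.5, 2.0)        # random travel distance
    new_heading = math.radians(heading_deg + angle_deg)
    return (x + distance * math.cos(new_heading),
            y + distance * math.sin(new_heading))
```

For the "specified position" variant, the caller simply supplies a fixed `angle_deg` and `distance` instead of leaving them random.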
For detailed description of other steps in this embodiment, reference may be made to other embodiments, which are not repeated herein.
In some embodiments of the present application, the self-moving device supports the whole area operation mode and the area operation mode, in this case, in addition to determining the target operation mode by the self-moving device as shown in fig. 1b and fig. 1c, the target operation mode may be determined by the method in the embodiment shown in fig. 1 d. Fig. 1d is a schematic flow chart of another operation method provided in an exemplary embodiment of the present application. As shown in fig. 1d, the method comprises:
s1d, receiving a working instruction;
s2d, calculating a time interval from the last time of executing the operation task in the whole-area operation mode according to the operation instruction;
s3d, if the time interval is larger than or equal to a set interval threshold, determining that the whole-area operation mode is a target operation mode; if the time interval is smaller than a set interval threshold, determining that the regional operation mode is the target operation mode;
s4d, identifying at least partial area with the target area attribute in the first area according to the target operation mode, and using the partial area as a second area of the operation task to be executed;
and S5d, executing the operation task in the second area by adopting the operation parameters adaptive to the second area.
Optionally, taking a home environment as an example, the job task executed by the self-moving device is a cleaning task, and the set interval threshold is 5 days. If the self-moving device calculates, according to the job instruction, that 7 days have passed since the whole-area operation mode was last used to execute a job task, this indicates that the whole home environment has not been cleaned for some time and the indoor contamination degree may be serious; the self-moving device may then take the whole-area operation mode as the target operation mode and clean the whole home environment. If instead the calculated interval is 2 days, the whole home environment was cleaned recently and the indoor contamination degree is unlikely to be serious; the self-moving device may take the area operation mode as the target operation mode and clean only contamination-prone areas such as the kitchen and the toilet.
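Steps S2d and S3d, with the example above, can be sketched as follows. The 5-day threshold is the illustrative value from the example; the date-based bookkeeping is an assumption.

```python
from datetime import date

# Minimal sketch (hypothetical): choose the target operation mode from the
# interval since the last whole-area job (steps S2d/S3d). The 5-day default
# threshold mirrors the example above.

def select_mode(last_whole_area_job: date, today: date,
                threshold_days: int = 5) -> str:
    interval = (today - last_whole_area_job).days
    # At or beyond the threshold, fall back to a full clean of the first area.
    return "whole_area" if interval >= threshold_days else "area"
```

With a 7-day gap this returns the whole-area mode; with a 2-day gap, the area mode, matching the two cases in the example.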
In the embodiment shown in fig. 1d, when the target operation mode is determined to be the area operation mode and the area attribute corresponding to the area operation mode is the stain attribute, the way the second area is determined is not limited; the following are examples:
mode 1: the self-moving device may traverse the first area, collect surrounding environment images during the traversal, and identify stain areas from those images as target areas. For example, the self-moving device may continuously collect surrounding environment images during the traversal and compare them with known stain images in an image library; when the contamination degree of a collected image is close or equal to that of a known stain image, the area where that image was collected is determined to be a target area. As another example, the self-moving device may continuously collect surrounding environment images during the traversal and, after each collection, compare the currently collected image with historical environment images collected in the first area; when the currently collected image is determined to be a stain image, the area corresponding to that image is taken as the target area.
Mode 2: the method comprises the steps that the dirty area which is found is marked on an environment map, and based on the dirty area, the dirty area in the first area is identified as a second area to be used for executing a work task according to dirty mark information on the environment map by the self-mobile equipment, wherein the dirty mark information corresponds to the dirty area. For example, in the environment map, the kitchen and the toilet are marked as dirty areas, the self-moving device can take the kitchen and the toilet as target areas, and move to the kitchen and the toilet according to the environment map to execute a job task.
Mode 3: if the operation instruction includes the mode identifier and the mode identifier corresponds to the area operation mode, the self-mobile device may identify, according to the area identifier included in the operation instruction, a dirty area in the first area as a target area by combining the environment map, where the area identifier points to the dirty area. For example, in a home environment, where the area contained in the job instruction is identified as a kitchen, the self-moving device may target the kitchen and move to the kitchen to perform a mopping task.
In the above embodiment, the mode 1 and the mode 2 can be applied to any one of the embodiments shown in fig. 1a to 1d, and the mode 3 can be applied to the embodiment shown in fig. 1 b.
In the above or following embodiments of the present application, when the second area differs, the operation parameters adopted by the self-moving device when executing the job task may also differ. Therefore, once the second area is determined, the self-moving device can execute the job task within it using operation parameters adapted to it. In this embodiment, if the second area is the entire first area, a default third operation parameter may be adopted to execute the job task over the whole area; if the second area is a dirty area, a first operation parameter adapted to the dirty area is acquired, and the job task is executed in the dirty area according to the first operation parameter.
The method for acquiring the first operation parameter includes, but is not limited to, the following:
mode A: before executing the task, the self-moving device may collect an environment image of the dirty area, recorded as the first environment image, and identify at least one of the stain attribute, the ground attribute, and the scene information of the dirty area from it; based on this information, the first operation parameter adapted to the dirty area is determined. The stain attribute includes, but is not limited to, the stain type, area size, and stubbornness; the ground attribute includes, but is not limited to, the floor material, unevenness, and texture; the scene information includes, but is not limited to, the objects and lighting in the area.
Mode B: marking a stain area on an environment map, namely marking stain marking information corresponding to the stain area on the environment map, simultaneously recording historical operation parameters used for executing an operation task on the stain area, establishing a corresponding relation between the historical operation parameters and the stain marking information, acquiring historical operation parameters corresponding to the stain marking information based on the historical operation parameters, and correcting a first operation parameter according to the historical operation parameters; the historical operation parameters are operation parameters used in the process of executing the historical operation tasks on the dirty area. Alternatively, the self-moving device may directly take a certain historical job parameter as the first job parameter; or taking the average value of the historical operation parameters in the specified historical time period as the first operation parameter; the currently acquired environment image can be compared with the environment image acquired at the historical moment, the historical operation parameter is corrected according to the comparison result of the stain and dirt degree in the environment image, and the corrected operation parameter is used as the first operation parameter and the like.
Mode C: the self-mobile equipment can acquire an environment image of a stain area before executing a task, record the environment image as a first environment image, identify at least one of stain attribute, ground attribute and scene information in the stain area according to the first environment image, determine a first operation parameter matched with the stain area, and execute the task in the stain area according to the first operation parameter. Further optionally, when the first operation parameter does not meet the operation effect requirement, a historical operation parameter corresponding to the dirt marking information may be acquired, the first operation parameter is corrected according to the historical operation parameter, and the corrected operation parameter is used as the first operation parameter. The embodiments of the present application are not limited to specific implementation manners.
The specific contents of the first and third operation parameters may differ with the self-moving device and the application scenario. Taking the self-moving device as a mopping robot or a sweeping-and-mopping robot as an example, the job task it can execute is a cleaning task; correspondingly, the first or third operation parameter may include at least one of the detergent type, detergent amount, cleaning duration, cleaning force, and cleaning count. The robot determines at least one of these according to at least one of the identified stain attribute, ground attribute, and scene information of the dirty area, as the first operation parameter adapted to the dirty area. The first operation parameter may be determined from any one, any two, or all three of the stain attribute, the ground attribute, and the scene information. The following description takes the case of combining all three:
optionally, if the stain attribute indicates a stubborn stain, the ground attribute indicates a floor made of a perishable material, and the scene information indicates a non-greasy scene, the detergent type is determined to be water, the detergent amount a first amount, the cleaning force a first force, the cleaning duration a first duration, and the cleaning count a first count, yielding the first operation parameter. If the stain attribute indicates a non-stubborn stain, the ground attribute indicates a non-perishable material, and the scene information indicates a greasy scene, the detergent type is determined to be detergent, the detergent amount a second amount, the cleaning force a second force, the cleaning duration a second duration, and the cleaning count a second count, yielding the first operation parameter. The first amount is smaller than the second amount, the first force smaller than the second force, the first duration shorter than the second duration, and the first count smaller than the second count.
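The two attribute combinations above can be sketched as a simple mapping. All concrete amounts, forces, and counts below are illustrative assumptions; the patent only constrains the first values to be smaller than the second.

```python
# Minimal sketch (hypothetical): map the identified stain attribute, ground
# attribute, and scene information to a first operation parameter, following
# the two combinations described above. The numeric values are illustrative
# assumptions, not values from the patent.

def first_operation_parameter(stubborn: bool, perishable_floor: bool,
                              greasy: bool) -> dict:
    if stubborn and perishable_floor and not greasy:
        # Gentle parameters: water, smaller amount/force, fewer passes.
        return {"detergent": "water", "amount_ml": 50, "force_pa": 100,
                "duration_s": 60, "passes": 2}
    if not stubborn and not perishable_floor and greasy:
        # Stronger parameters: detergent, larger amount/force, more passes.
        return {"detergent": "detergent", "amount_ml": 100, "force_pa": 200,
                "duration_s": 120, "passes": 4}
    # Default fallback for other combinations (not specified by the patent).
    return {"detergent": "water", "amount_ml": 50, "force_pa": 100,
            "duration_s": 60, "passes": 2}
```

Any two, one, or three of the inputs could equally drive the lookup, per the preceding paragraph.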
For example, in a home environment a dining table is usually placed in the living room; dropped rice grains may stick to the floor and, once dust adheres, easily form stubborn stains. When the self-moving device recognizes the stain attribute, ground attribute, and other information of this scene, it can clean the stain with a certain amount of clean water. In a kitchen, by contrast, long-term frying may deposit grease on the tiles, and the stains in the gaps between tiles are especially difficult to remove. When the self-moving device recognizes the stain and ground attributes of this scene, it can clean the grease with a certain amount of detergent. In this example, stains stuck to the floor can be removed by wiping with clean water a few times, while the stubborn grease on the tiles must be cleaned repeatedly with a larger detergent dose and a greater cleaning force. Therefore, when cleaning stains in different scenes, the self-moving device can adaptively adjust the amount of clean water or detergent, the cleaning force, and the cleaning count according to the stubbornness of the stains and the floor material, making the cleaning effect more pronounced.
Further optionally, some optional embodiments of the present application also provide a job method adapted for a self-moving device to execute a job task in a dirty area. Fig. 2 is a flowchart of this job method; as shown in fig. 2, it includes the following steps:
p1, under the condition that the region operation instruction is received, identifying a dirty region existing in a first region;
p2, identifying at least two kinds of information in the stain attribute, the ground attribute and the scene information in the stain area according to the environment image of the stain area;
p3, determining a first operation parameter matched with the stain area according to at least two kinds of information;
p4, executing the operation task in the dirty area by adopting the first operation parameter, and acquiring a second environment image of the dirty area after executing the operation task;
p5, when the dirty area is determined to not reach the set operation effect according to the second environment image, the dirty degree of the dirty area is identified according to the second environment image;
and P6, re-determining the second operation parameter according to the dirt degree of the dirt area, and executing the operation task in the dirt area again by adopting the second operation parameter.
In this embodiment, to achieve an ideal job effect, after executing the job task in the dirty area according to the first operation parameter, the self-moving device may further collect a second environment image of the dirty area. If it determines from the second environment image that the dirty area has not reached the set job effect, it identifies the contamination degree of the dirty area from the second environment image, re-determines a second operation parameter according to that contamination degree, and executes the job task in the dirty area again with the second operation parameter. Optionally, the self-moving device may compare the collected second environment image with the first environment image and judge whether the difference between their contamination degrees is greater than a first preset threshold; if so, the set job effect is determined to have been reached, otherwise not. Alternatively, the self-moving device may compare the collected second environment image with a preset environment image and judge whether the difference between their contamination degrees is smaller than a second preset threshold; if so, the set job effect is determined to have been reached, otherwise not.
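The two effect checks above can be sketched as follows, assuming contamination degrees have already been reduced to scalar scores (e.g. by an image comparison like the one earlier). The threshold values are illustrative assumptions.

```python
# Minimal sketch (hypothetical): judge whether the set job effect was reached,
# using either of the two comparisons described above. `before`/`after` are
# scalar contamination scores; the thresholds are illustrative assumptions.

def effect_reached_vs_first(before: float, after: float,
                            min_drop: float = 20.0) -> bool:
    """Compare against the first image: a large enough drop means success."""
    return (before - after) > min_drop

def effect_reached_vs_preset(after: float, preset: float = 5.0,
                             max_gap: float = 3.0) -> bool:
    """Compare against a preset clean image: a small enough gap means success."""
    return abs(after - preset) < max_gap
```

The first check rewards improvement relative to the starting state; the second checks closeness to an absolute cleanliness target. Either can gate whether the job is repeated.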
In the embodiments of the present application, the manner in which the self-moving device collects the second environment image is not limited, and it may differ with the form of the device. Optionally, if the self-moving device has both a front camera and a rear camera, the front camera collects a first environment image in the travel direction while the device moves forward, and whether the corresponding area is a dirty area is judged from that image; if so, the job task can be executed on the area corresponding to the first environment image. After the job task is finished, the self-moving device moves past the dirty area; at that point the rear camera's field of view covers the dirty area, and the rear camera collects the second environment image of it. Optionally, if the second environment image does not fully cover the dirty area, the self-moving device may move slightly in different directions to collect several second environment images and determine the job effect from all of them.
For example, the self-moving device may repeatedly execute the job task in the dirty area a set number of times, and once the set number is reached it may continue moving forward according to the existing environment map or collected environment images. As it continues forward it moves past the dirty area; the rear camera can then collect a second environment image of the dirty area where the job task was executed, and whether the dirty area has reached the set job effect is judged from that image. If not, the job task continues to be executed on the dirty area; if so, the device continues moving forward.
In another optional embodiment, if the self-moving device has only a front camera, the front camera collects a first environment image in the travel direction while the device moves forward, and whether the corresponding area is a dirty area is judged from that image; if so, the job task can be executed on the area corresponding to the first environment image. After the job task is finished, the self-moving device moves past the dirty area; to determine the job effect, the device can then be controlled to rotate 180 degrees so that the front camera's field of view covers the dirty area, and the front camera collects the second environment image of it.
For example, the self-moving device may repeatedly execute the job task in the dirty area a set number of times, and once the set number is reached it may continue moving forward according to the existing environment map or collected environment images. As it continues forward it moves past the dirty area; at that point the front camera's field of view cannot cover the dirty area, so the device is controlled to rotate 180 degrees, i.e., turn around, and the front camera collects a second environment image of the dirty area where the job task was executed; whether the dirty area has reached the set job effect is judged from that image. If not, the device continues executing the job task on the dirty area and, once the set job effect is reached, continues moving in the target direction; if so, the device rotates 180 degrees again and continues moving in the direction it was traveling before turning around.
In another optional embodiment, when a rotatable camera is installed on top of the self-moving device, the device may use it to collect a first environment image in the travel direction while moving forward, and judge from that image whether the corresponding area is a dirty area; if so, the job task can be executed on the area corresponding to the first environment image. After the job task is finished, the self-moving device moves past the dirty area; to determine the job effect, the rotatable camera can be rotated so that its field of view covers the dirty area, and it collects the second environment image of the dirty area.
For example, the self-moving device may repeatedly execute the job task in the dirty area a set number of times, and once the set number is reached it may continue moving forward according to the existing environment map or collected environment images. As it continues forward it moves past the dirty area; the rotatable camera, which can turn through 360 degrees, is then rotated so that its field of view covers the dirty area, collects a second environment image of it, and whether the dirty area has reached the set job effect is judged from that image. If not, the job task continues to be executed on the dirty area; if so, the device continues moving forward.
In the embodiments of the present application, the self-moving device identifies the contamination degree of the dirty area from the collected second environment image; if it determines from that contamination degree that the dirty area has not reached the set job effect, it re-determines a second operation parameter according to the contamination degree and executes the job task in the dirty area again with the second operation parameter. The second operation parameter is not limited in this embodiment and may be the same as or different from the first operation parameter. When it differs, the parameter types may differ, the parameter values may differ, or both. For example, for stubborn grease the first operation parameter might be 10 ml of detergent, 100 Pa of cleaning force, and 10 passes; if after the first job the grease is clearly reduced, then for the second job the detergent can be changed to clean water, or the force or pass count reduced, or the parameters left unchanged, depending on the observed job effect.
The embodiments of the present application also do not limit how the self-moving device re-determines the second operation parameter from the contamination degree of the dirty area. Optionally, if the contamination degree of the dirty area is greater than a set contamination level, the value of each numeric parameter in the first operation parameter is increased to obtain the second operation parameter. For example, suppose the first operation parameter is 10 ml of detergent, 100 Pa of cleaning force, and 10 passes; if after the first job the contamination degree of the dirty area is still greater than the set contamination level, the stubborn stain cannot be removed with the first operation parameter and the job requirement cannot be met. In that case, each numeric value in the first operation parameter may be increased, for example to 15 ml of detergent, 200 Pa of force, and 20 passes, as the second operation parameter, and the job task continued on the dirty area with it to strengthen the job effect. Correspondingly, if the contamination degree of the dirty area is smaller than the set contamination level, the numeric values in the first operation parameter may be reduced to obtain the second operation parameter, which is then used to continue the job task on the dirty area.
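The scaling rule above can be sketched as follows. The 1.5x/0.5x scale factors are illustrative assumptions; the patent only says numeric values are increased or decreased.

```python
# Minimal sketch (hypothetical): derive the second operation parameter by
# scaling the numeric values of the first operation parameter up or down
# depending on the remaining contamination level, as described above.
# The scale factors are illustrative assumptions.

def second_operation_parameter(first: dict, contamination: float, level: float,
                               up: float = 1.5, down: float = 0.5) -> dict:
    scale = up if contamination > level else down
    return {k: (v * scale if isinstance(v, (int, float)) else v)
            for k, v in first.items()}
```

Non-numeric entries such as the detergent type pass through unchanged; a fuller implementation might also swap the type (e.g. detergent to water) per the previous paragraph.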
Further optionally, to reduce unnecessary resource consumption and improve job execution efficiency, in step P4 above, before collecting the second environment image of the dirty area, it may first be judged whether collecting it is necessary. Optionally, as shown in fig. 2b, step P4 may include:
p4a, executing an operation task in the dirty area by adopting a first operation parameter;
p4b, judging whether the stain area belongs to a specific area or not, wherein the specific area comprises an area with the stain occurrence frequency larger than a set frequency threshold or an area belonging to a specific scene;
p4c, collecting a second environment image of the stain area;
in P4b, if the judgment is yes, step P4c and the subsequent steps are executed; if not, the job task ends. In this embodiment, the specific area comprises an area where stains occur more frequently than a set frequency threshold, or an area belonging to a specific scene; for example, in a home environment, areas such as the kitchen, bathroom, and hallway attract stains more easily than areas such as the living room, bedroom, and balcony. Executing the collection of the second environment image and the subsequent operations only when the area is judged to be a specific area reduces how often environment images are collected, reducing resource waste; in particular, for a self-moving device with only a front camera, it reduces how often the device must turn around, improving job execution efficiency.
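The P4b gate can be sketched as follows. The scene names and the frequency threshold are illustrative assumptions.

```python
# Minimal sketch (hypothetical): step P4b — collect a verification image only
# when the dirty area belongs to a specific area, i.e. its stain frequency
# exceeds a threshold or it belongs to a specific scene. Scene names and the
# threshold are illustrative assumptions.

SPECIFIC_SCENES = {"kitchen", "bathroom", "hallway"}

def needs_verification(area: str, stain_frequency: float,
                       frequency_threshold: float = 0.5) -> bool:
    """P4b: verify only frequently stained areas or specific scenes."""
    return stain_frequency > frequency_threshold or area in SPECIFIC_SCENES
```

When this returns False the device skips P4c entirely, avoiding the extra image capture (and, for a front-camera-only device, the 180-degree turn).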
In an optional embodiment of the present application, to facilitate targeted jobs on a dirty area, when the target area is a dirty area it may be judged whether the environment map contains stain marking information corresponding to that dirty area; if not, stain marking information corresponding to the dirty area is added to the environment map so that, when executing subsequent job tasks, the self-moving device can execute the job task on the dirty area according to the stain marking information. Further, the number of job tasks executed on the dirty area after marking, and the job result of each, can be recorded against the stain marking information on the environment map; if the job results of those tasks all satisfy the set condition, the stain marking information corresponding to the dirty area is deleted. For example, in a home environment, stains tend to occur in specific areas such as the kitchen and the toilet, or in the part of the living room where the dining table stands. To guarantee the job effect on a dirty area, a job count can be set against which the job effect is measured, reducing the probability that the dirty area is missed. For example, if 2 jobs are set for a dirty area, then after the self-moving device has executed 2 jobs on it, each reaching the set job effect, the stain marking information is deleted from the environment map.
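The mark-record-delete lifecycle above can be sketched as follows. The map representation is a hypothetical dict; the required count of 2 matches the example above.

```python
# Minimal sketch (hypothetical): maintain stain marks on the environment map.
# A mark is added for a newly found dirty area, job results are recorded
# against it, and the mark is deleted once the required number of jobs have
# all reached the set job effect (2 in the example above).

def record_job(map_marks: dict, area: str, success: bool, required: int = 2):
    mark = map_marks.setdefault(area, {"results": []})  # add mark if absent
    mark["results"].append(success)
    # Delete the mark once `required` jobs have all reached the set effect.
    if len(mark["results"]) >= required and all(mark["results"]):
        del map_marks[area]
```

A failed job keeps the mark in place, so the area remains targeted on subsequent runs.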
In addition, to improve user experience and reduce resource waste, the self-moving device can execute the job task with the determined job parameters in a dirty area, while in a non-dirty area it can raise its moving speed so as to work quickly. Further optionally, when the stain attribute, ground attribute, and environment information cannot be identified, the self-moving device may execute the job task with default job parameters, or send a prompt to the user so that the user can set the job parameters for the stain attribute, ground attribute, and environment information.
The application process of the present application will be exemplarily described below by taking a working environment as a home environment, a self-moving device as a cleaning robot, and an executed working task as a sweeping task.
Scene 1:
The whole indoor floor has many stains and much floating dust. The user may first issue a sweeping instruction to the cleaning robot; after receiving it, the cleaning robot sweeps the whole house and removes the floating dust on the floor. Then, to remove stubborn stains in some regions of the floor, the user may issue a local mopping instruction to the cleaning robot and indicate the region where the stubborn stains are located. After receiving the mopping instruction, the cleaning robot moves to that region, traverses the stain area while collecting ground information, determines the stain attributes, ground attributes, environment information and other contents of the stain area, and cleans the stains with an adapted type and amount of cleaning agent, cleaning force, and cleaning duration.
Scene 2:
After the user starts the cleaning robot, only a normal work-start instruction is issued, without specifying a cleaning mode or a cleaning area. The cleaning robot can collect a partial environment image within 1 square meter of its current position, or within 1 square meter of the center of the living room, and determine a whole-area cleaning mode or an area cleaning mode according to the degree of soiling in that image. If the collected environment image shows very serious soiling, the cleaning robot adopts the whole-area cleaning mode, traverses the whole house from its current position, and collects environment images as it goes. When the current position is determined to be a stain area from the collected image, the cleaning robot cleans the stains with an adapted type and amount of cleaning agent, cleaning force, and cleaning duration, according to the stain attributes, ground attributes, environment information and other contents of the stain area. If the current position is determined not to be a stain area, it does not clean but continues moving and collecting environment images until the whole-house traversal ends.
Scene 3:
After the user starts the cleaning robot, only a normal work-start instruction is issued, without specifying a cleaning mode or a cleaning area. Based on an existing environment map, the cleaning robot moves to the stain area corresponding to the stain area marking information in the map, traverses that area, and collects environment images. When the current position is determined to be a stain area from the collected image, the cleaning robot cleans the stains with an adapted type and amount of cleaning agent, cleaning force, and cleaning duration, according to the stain attributes, ground attributes, environment information and other contents of the stain area. If the current position is determined not to be a stain area, it does not clean but continues moving and collecting environment images until the traversal of the stain area ends.
It should be noted that these scenario examples are only exemplary and do not exhaust the embodiments of the present application; for the specific implementation of other alternatives, reference may be made to the descriptions above. Further, the embodiments of the present application do not limit the operating manner of the self-moving device or the structure of the self-moving device to which the operating method is applied; for example, in the scenarios above, the cleaning robot may be a sweeper, a mopping machine, a sweeping-and-mopping all-in-one machine, a scrubber, or the like, and any device to which the operating method of the present application can be applied falls within the embodiments of the present application.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subject of steps S1a to S4a may be device a; for another example, the executing agent of steps S1a and S2a may be device a, and the executing agent of steps S3a and S4a may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations appearing in a specific order are included, but it should be clearly understood that these operations may be executed out of the order they appear herein or in parallel, and the order of the operations such as S1a, S2a, etc. is merely used to distinguish between the various operations, and the order itself does not represent any order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor do they limit the types of "first" and "second".
An embodiment of the present application further provides a self-moving device, and fig. 3a is a schematic structural diagram of the self-moving device according to the embodiment of the present application, and as shown in fig. 3a, the self-moving device includes: a processor 31 and a memory 32 in which a computer program is stored; the processor 31 and the memory 32 may be one or more.
The memory 32 is mainly used to store computer programs, which can be executed by the processor 31 so that the processor 31 controls the self-moving device to realize corresponding functions and complete corresponding actions or tasks. In addition to computer programs, the memory 32 may store various other data to support operation of the self-moving device, such as instructions for any application or method operating on the self-moving device.
The memory 32 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
In the embodiment of the present application, the implementation form of the processor 31 is not limited; it may be, for example but not limited to, a CPU, a GPU, or an MCU. The processor 31 may be regarded as the control system of the self-moving device and may execute the computer programs stored in the memory 32 to control the self-moving device to realize corresponding functions and complete corresponding actions or tasks. It should be noted that the functions, actions, or tasks to be realized may differ depending on the implementation form and scene of the self-moving device; accordingly, the computer programs stored in the memory 32 may differ, and executing different computer programs may control the self-moving device to realize different functions and complete different actions or tasks.
In an optional embodiment of the present application, the self-moving device may include a device body, and optionally the processor 31 and the memory 32 may be provided on the device body. The device body is the execution mechanism of the self-moving device and can carry out, in a given environment, the operations designated by the processor 31. The device body embodies, to a certain extent, the appearance of the self-moving device, which is not limited in this embodiment; of course, the appearance may vary with the implementation form of the self-moving device. Taking the outer contour as an example, it may be a regular shape, such as a circle, an ellipse, a square, a triangle, a drop shape, or a D-shape, or an irregular shape; for example, the outer contour of a humanoid robot or of an unmanned vehicle is irregular.
In some optional embodiments, the self-moving device may also include other components, such as a display 33, a power component 34, and a communication component 35. Only some components are shown schematically in fig. 3a; this does not mean that the self-moving device includes only those components, and for different application requirements it may include others. For example, when voice interaction is required, the self-moving device may further include an audio component 36, as shown in fig. 3a. Further, as shown in fig. 3b to fig. 3d, taking a self-moving device whose outer contour is circular as an example, it may further include a camera for collecting environment images; optionally, it may include only a front camera, both a front camera and a rear camera, or a rotating camera. The components the self-moving device can include are determined by its product form and are not limited here.
In the embodiment of the present application, when the processor 31 executes the computer program in the memory 32, it is configured to: receive an operation instruction; determine a target operation mode according to the operation instruction, wherein different operation modes correspond to different area attributes and the target operation mode corresponds to a target area attribute; identify, according to the target operation mode, at least a partial area in the first area having the target area attribute as a second area in which a job task is to be executed; and execute the job task in the second area with job parameters adapted to the second area.
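The four configured steps (receive the instruction, determine the mode, identify the second area, execute with adapted parameters) can be sketched end to end. The mode names and the dict-based room model below are illustrative assumptions, not terms from the patent.

```python
# Minimal runnable sketch of the configured flow.
def determine_target_mode(instruction):
    # A mode identifier carried in the instruction wins; otherwise this
    # sketch assumes the area (dirty-attribute) mode.
    return instruction.get("mode", "area")

def identify_second_area(first_area, mode):
    if mode == "whole":
        return first_area                             # whole-area attribute
    return [a for a in first_area if a["dirty"]]      # dirty attribute only

def run_job(instruction, first_area):
    mode = determine_target_mode(instruction)
    second_area = identify_second_area(first_area, mode)
    # Stand-in for "execute with adapted parameters": deep-clean dirty rooms.
    return [(a["name"], "deep" if a["dirty"] else "normal") for a in second_area]

rooms = [{"name": "kitchen", "dirty": True}, {"name": "bedroom", "dirty": False}]
print(run_job({}, rooms))                  # [('kitchen', 'deep')]
print(run_job({"mode": "whole"}, rooms))   # both rooms; bedroom cleaned normally
```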
In an alternative embodiment, when determining the target job mode according to the job instruction, the processor 31 is configured to: according to a mode identifier contained in the operation instruction, taking an operation mode corresponding to the mode identifier as the target operation mode; or acquiring a partial environment image in the first area according to the operation instruction, wherein the partial environment image reflects the overall pollution degree of the first area; selecting an operation mode adapted to the overall degree of contamination as the target operation mode.
In an alternative embodiment, when the processor 31 acquires the partial environment image in the first area according to the job instruction, it is configured to: acquiring an environment image in a surrounding environment area from the current position as the partial environment image according to the operation instruction; or moving the current position to a first position according to the operation instruction, and acquiring an environment image in a surrounding environment area at the first position as the partial environment image.
In an optional embodiment, when the processor 31 identifies, according to the target job mode, at least a partial area in the first area having the target area attribute as a second area in which a job task is to be executed, it is configured to: if the target operation mode is a whole-region operation mode and the target region attribute is a whole-region attribute, take the first region as the second region; and if the target operation mode is an area operation mode and the target area attribute is a dirty attribute, identify a dirty area in the first area as the second area.
In an alternative embodiment, when determining the target job mode according to the job instruction, the processor 31 is configured to: according to the operation instruction, calculating the time interval from the last time that the whole-area operation mode is adopted to execute the operation task; if the time interval is greater than or equal to a set interval threshold, determining that the whole-region operation mode is the target operation mode; and if the time interval is smaller than a set interval threshold, determining that the regional operation mode is the target operation mode.
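The time-interval rule above can be sketched directly: if the last whole-area job is older than the threshold, run the whole-area mode, otherwise the area mode. The 7-day threshold is an assumed example value, not one the patent specifies.

```python
# Sketch of the interval-based mode selection described above.
from datetime import datetime, timedelta

def choose_mode(last_whole_area_job, now, threshold=timedelta(days=7)):
    if now - last_whole_area_job >= threshold:
        return "whole-area"
    return "area"

assert choose_mode(datetime(2021, 6, 1), datetime(2021, 6, 10)) == "whole-area"
assert choose_mode(datetime(2021, 6, 8), datetime(2021, 6, 10)) == "area"
```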
In an alternative embodiment, the processor 31, when identifying the dirty area in the first area as the second area, is configured to: traverse the first area, collecting surrounding environment images during the traversal, and identify a dirty area as the second area according to the surrounding environment images; or, in the case that the operation instruction includes a mode identifier and the mode identifier corresponds to an area operation mode, identify a dirty area in the first area as the second area by combining the area identifier contained in the operation instruction with an environment map, wherein the area identifier points to the dirty area; or identify a dirty area in the first area as the second area according to dirty marking information on the environment map, wherein the dirty marking information corresponds to the dirty area.
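The three alternative routes to the dirty area can be sketched as a small dispatcher: an explicit area identifier in the instruction, stain marks already on the environment map, or traversal with on-board detection. The data shapes and the `detect_fn` stub are assumptions for illustration.

```python
# Dispatcher over the three identification routes described above.
def find_dirty_areas(instruction, env_map, detect_fn):
    if "area_id" in instruction:                      # route 2: id + map
        return [env_map["areas"][instruction["area_id"]]]
    if env_map.get("stain_marks"):                    # route 3: map marks
        return list(env_map["stain_marks"].values())
    return detect_fn()                                # route 1: traverse + vision

env_map = {"areas": {"A1": "kitchen floor"}, "stain_marks": {}}
print(find_dirty_areas({"area_id": "A1"}, env_map, lambda: []))    # ['kitchen floor']
print(find_dirty_areas({}, env_map, lambda: ["spot near sofa"]))   # ['spot near sofa']
```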
In an optional embodiment, the processor 31 employs the job parameter adapted to the second area, and when executing the job task in the second area, is configured to: if the second area is a dirty area, identifying at least one of dirty attribute, ground attribute and scene information in the dirty area according to a first environment image of the dirty area collected before an operation task is executed; determining a first operation parameter adapted to the stain area according to the at least one piece of information; and executing a job task in the dirty area according to the first job parameter.
In an alternative embodiment, the processor 31 is further configured to: if the stain marking information corresponding to the stain area is included on the environment map, acquiring historical operation parameters corresponding to the stain marking information, and correcting the first operation parameters according to the historical operation parameters; wherein the historical job parameters are job parameters used in performing historical job tasks on the soiled area.
In an optional embodiment, the job task is a cleaning task, and the first job parameter includes at least one of a type of detergent, an amount of detergent, a length of cleaning time, a cleaning force, and a number of cleaning times; accordingly, the processor 31, when generating the first operation parameter adapted to the stained area according to the at least one information, is configured to: if the stain attribute represents stubborn stains, the ground attribute represents that the ground is made of perishable materials, and the scene information represents a non-oil pollution scene, determining that the type of the cleaning agent is water, the using amount of the cleaning agent is a first amount, the cleaning force is a first force, the cleaning duration is a first duration, and the cleaning times is a first number, so as to obtain a first operation parameter; if the stain attribute represents non-stubborn stains, the ground attribute is a non-perishable material, and the scene information represents an oil stain scene, determining that the type of the cleaning agent is a cleaning agent, the using amount of the cleaning agent is a second amount, the cleaning force is a second force, the cleaning duration is a second duration, and the cleaning times are second times, so as to obtain a first operation parameter; the first amount is smaller than the second amount, the first force is smaller than the second force, the first duration is smaller than the second duration, and the first times is smaller than the second times.
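The two parameter branches above can be sketched as a lookup. The numeric levels (1 for "first", 2 for "second") merely encode the requirement that each first-level value be smaller than its second-level counterpart; the concrete values are assumptions.

```python
# Sketch of the two specified branches of the first job parameter.
def first_job_params(stubborn, perishable_floor, oily_scene):
    if stubborn and perishable_floor and not oily_scene:
        # Stubborn stain, perishable floor, non-oily scene: water, gentle settings.
        return {"agent": "water", "amount": 1, "force": 1, "duration": 1, "times": 1}
    if not stubborn and not perishable_floor and oily_scene:
        # Non-stubborn stain, non-perishable floor, oily scene: detergent, stronger.
        return {"agent": "detergent", "amount": 2, "force": 2, "duration": 2, "times": 2}
    return None  # other combinations are not specified by this example

print(first_job_params(True, True, False)["agent"])   # water
print(first_job_params(False, False, True)["agent"])  # detergent
```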
In an alternative embodiment, the processor 31 is further configured to: acquiring a second environment image of the dirty area after executing a job task in the dirty area according to the first job parameter; if the dirty area does not reach the set operation effect according to the second environment image, identifying the dirty degree of the dirty area according to the second environment image; re-determining a second operation parameter according to the dirt degree of the dirt area; and executing the operation task in the dirty area again by adopting the second operation parameter.
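The check-and-retry loop above can be sketched as follows: after the first job, the second image is graded; if the set effect was not reached and the soiling grade exceeds the set level, every numeric value in the first job parameters is increased to form the second job parameters. The 1.5x scaling factor is an assumption.

```python
# Sketch of re-deriving the second job parameters from the first.
def reclean_params(first_params, effect_ok, dirt_level, level_threshold=2):
    if effect_ok:
        return None                        # set job effect reached: stop
    second = dict(first_params)
    if dirt_level > level_threshold:
        for key, value in second.items():
            if isinstance(value, (int, float)):
                second[key] = value * 1.5  # increase each numeric parameter
    return second

print(reclean_params({"agent": "water", "amount": 2}, False, 3))
# {'agent': 'water', 'amount': 3.0}
```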
In an alternative embodiment, the processor 31, when acquiring the second environment image of the stained area, is configured to: if the self-moving equipment is provided with a rear camera and the field angle of the rear camera covers the dirty area, acquiring a second environment image of the dirty area by using the rear camera; or if the self-moving equipment is provided with a front camera, controlling the self-moving equipment to rotate to a position where the field angle of the front camera covers the dirty area, and acquiring a second environment image of the dirty area by using the front camera; or, under the condition that a rotating camera is installed at the top of the self-moving device, the rotating camera is controlled to rotate to a position where the field angle of the rotating camera covers the dirty area, and a second environment image of the dirty area is collected by the rotating camera.
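The three camera configurations above can be sketched as a dispatcher. The priority order and the returned action strings are illustrative assumptions; the point is that each configuration yields a way to aim a camera at the dirty area.

```python
# Dispatcher over the rear / front-only / rotating camera configurations.
def capture_second_image(robot):
    if robot.get("rear_camera"):
        return "use rear camera"           # rear FOV can already cover the area
    if robot.get("front_camera"):
        return "rotate body, use front camera"
    if robot.get("rotating_camera"):
        return "rotate top camera"         # turn the camera, not the body
    return None

print(capture_second_image({"front_camera": True}))  # rotate body, use front camera
```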
In an alternative embodiment, the processor 31, when re-determining the second operation parameter according to the degree of soiling of the soiled area, is configured to: and if the dirt degree of the dirt area is larger than the set dirt degree grade, increasing the value of each numerical parameter in the first operation parameter to obtain the second operation parameter.
In an alternative embodiment, the processor 31 is further configured to, before acquiring the second environment image of the stained area: judging whether the dirt area belongs to a specific area or not, wherein the specific area comprises an area with the dirt occurrence frequency larger than a set frequency threshold or an area belonging to a specific scene; and if so, executing the operation of acquiring the second environment image of the dirty area and the subsequent operation.
In an alternative embodiment, the processor 31 is further configured to: under the condition that the target area is a dirty area, judging whether an environment map already contains dirty mark information corresponding to the dirty area; if not, adding stain marking information corresponding to the stain area in the environment map; recording the number of times of executed work tasks of the dirty area after marking and the work result of each work task according to the dirty mark information corresponding to the dirty area on the environment map; and if the times and the operation result both reach set conditions, deleting the stain marking information corresponding to the stain area.
In an optional embodiment, when the mobile device executes a job task in the dirty area, the processor 31, in case of receiving the area job instruction, is further configured to: identifying a stained area present in the first area; identifying at least two kinds of information in stain attributes, ground attributes and scene information in the stain area according to the environment image of the stain area; determining an operation parameter matched with the stain area according to the at least two kinds of information; and executing the operation task in the dirty area by adopting the operation parameters.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, can implement the steps that can be performed by the self-moving device in the foregoing method embodiments.
The communication component in the above embodiments is configured to facilitate communication between the device in which the communication component is located and other devices in a wired or wireless manner. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE, 5G, or the like, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
The display in the above embodiments includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly of the above embodiments provides power to various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component in the above embodiments may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (17)

1. An operation method applicable to self-moving equipment, the method comprising:
receiving an operation instruction;
determining a target operation mode according to the operation instruction, wherein different operation modes correspond to different region attributes, and the target operation mode corresponds to a target region attribute;
identifying, according to the target operation mode, at least a partial area having the target area attribute in the first area as a second area in which an operation task is to be executed;
and executing the operation task in the second area with operation parameters adapted to the second area.
2. The method of claim 1, wherein determining a target job mode based on the job instruction comprises:
according to a mode identifier contained in the operation instruction, taking an operation mode corresponding to the mode identifier as the target operation mode;
or
Acquiring a partial environment image in a first area according to the operation instruction, wherein the partial environment image reflects the overall pollution degree of the first area; selecting a work mode adapted to the overall degree of contamination as the target work mode.
3. The method of claim 2, wherein capturing a partial environmental image in the first area according to the job instruction comprises:
acquiring an environment image in a surrounding environment area from the current position as the partial environment image according to the operation instruction;
or
And moving to a first position from the current position according to the operation instruction, and acquiring an environmental image in the surrounding environmental area at the first position as the partial environmental image.
4. The method according to any one of claims 1 to 3, wherein identifying at least a partial area having the target area attribute in the first area as the second area for the job task to be performed according to the target job mode comprises:
if the target operation mode is a whole-region operation mode and the target region attribute is a whole-region attribute, taking the first region as the second region;
and if the target operation mode is an area operation mode and the target area attribute is a dirty attribute, identifying a dirty area in the first area as the second area.
5. The method of claim 4, wherein determining a target job mode based on the job instruction comprises:
according to the operation instruction, calculating the time interval from the last time that the whole-area operation mode is adopted to execute the operation task;
if the time interval is greater than or equal to a set interval threshold, determining that the whole-region operation mode is the target operation mode;
and if the time interval is smaller than a set interval threshold, determining that the regional operation mode is the target operation mode.
6. The method of claim 4, wherein identifying a stained area in the first area as the second area comprises:
traversing the first area, acquiring a surrounding environment image in the traversing process, and identifying a stain area as the second area according to the surrounding environment image;
or
Under the condition that the operation instruction comprises a mode identifier and the mode identifier corresponds to an area operation mode, identifying a dirty area in the first area as the second area by combining an environment map according to an area identifier contained in the operation instruction, wherein the area identifier points to the dirty area;
or alternatively
And identifying a dirty area in the first area as the second area according to dirty marking information on the environment map, wherein the dirty marking information corresponds to the dirty area.
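Claim 6 offers three alternative ways to locate the stained area. A sketch of how a controller might try them in order of cheapness; `robot`, `instruction`, and `env_map` are hypothetical interfaces invented for illustration:

```python
def find_stained_area(robot, instruction, env_map):
    """Try claim 6's three alternatives: (a) an explicit area identifier in
    an area-mode job instruction, (b) stain marks stored on the environment
    map, (c) traversal with on-the-fly stain detection from camera images."""
    # (a) the job instruction points directly at the stained area
    if instruction.get("mode") == "area" and "area_id" in instruction:
        return env_map.lookup(instruction["area_id"])
    # (b) a previously recorded stain mark on the environment map
    marks = env_map.stain_marks()
    if marks:
        return marks[0]
    # (c) fall back to traversing the first area and detecting stains
    for image in robot.traverse_first_area():
        area = robot.detect_stain(image)
        if area is not None:
            return area
    return None
```

Ordering the branches this way avoids a full traversal whenever the instruction or the map already identifies the stain, though the claim itself treats the three paths as alternatives rather than a priority list.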
7. The method of claim 6, wherein executing the job task in the second area using job parameters adapted to the second area comprises:
if the second area is a stained area, identifying at least one of a stain attribute, a floor attribute, and scene information of the stained area from a first environment image of the stained area collected before the job task is executed; determining, from the at least one kind of information, a first job parameter adapted to the stained area; and executing the job task in the stained area according to the first job parameter.
8. The method of claim 7, further comprising:
if the environment map contains stain marking information corresponding to the stained area, acquiring the historical job parameters associated with that stain marking information, and correcting the first job parameter according to them; the historical job parameters being the job parameters used when historical job tasks were executed on the stained area.
9. The method of claim 8, wherein the job task is a cleaning task and the first job parameter comprises at least one of a cleaning-agent type, a cleaning-agent amount, a cleaning duration, a cleaning force, and a number of cleaning passes;
correspondingly, determining, from the at least one kind of information, a first job parameter adapted to the stained area comprises:
if the stain attribute indicates a stubborn stain, the floor attribute indicates a floor made of a perishable material, and the scene information indicates a non-oily scene, setting the cleaning-agent type to water, the cleaning-agent amount to a first amount, the cleaning force to a first force, the cleaning duration to a first duration, and the number of cleaning passes to a first number, to obtain the first job parameter;
if the stain attribute indicates a non-stubborn stain, the floor attribute indicates a non-perishable material, and the scene information indicates an oily scene, setting the cleaning-agent type to detergent, the cleaning-agent amount to a second amount, the cleaning force to a second force, the cleaning duration to a second duration, and the number of cleaning passes to a second number, to obtain the first job parameter;
wherein the first amount is smaller than the second amount, the first force is smaller than the second force, the first duration is shorter than the second duration, and the first number is smaller than the second number.
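Claim 9's parameter selection is a lookup from (stain, floor, scene) attributes to a cleaning profile. A sketch of the two combinations the claim names; the concrete numbers and units are invented for illustration, the patent only requires that every first-parameter value be smaller than its second-parameter counterpart:

```python
from dataclasses import dataclass

@dataclass
class CleaningParams:
    agent: str     # cleaning-agent type
    amount: int    # cleaning-agent amount (ml, assumed unit)
    force: int     # cleaning force level
    duration: int  # cleaning duration (seconds, assumed unit)
    passes: int    # number of cleaning passes

def first_job_params(stubborn, perishable_floor, oily):
    """Map the identified stain/floor/scene attributes to a first job
    parameter, following the two branches spelled out in claim 9."""
    if stubborn and perishable_floor and not oily:
        # gentle profile: water only, to protect the perishable floor
        return CleaningParams("water", 50, 1, 60, 1)
    if not stubborn and not perishable_floor and oily:
        # stronger profile: detergent against oil on a robust floor
        return CleaningParams("detergent", 120, 3, 180, 3)
    # other attribute combinations would have their own table entries
    return CleaningParams("water", 80, 2, 120, 2)
```

The claim constrains only the ordering between the two profiles (first amount < second amount, and so on), which the sketch preserves field by field.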
10. The method of claim 7, further comprising:
acquiring a second environment image of the stained area after the job task has been executed in the stained area according to the first job parameter;
if it is determined from the second environment image that the stained area has not reached the set cleaning effect, identifying the soiling level of the stained area from the second environment image;
re-determining a second job parameter according to the soiling level of the stained area, and executing the job task in the stained area again using the second job parameter.
11. The method of claim 10, wherein acquiring a second environment image of the stained area comprises:
if the self-moving device is fitted with a rear camera whose field of view covers the stained area, acquiring the second environment image of the stained area with the rear camera;
or
if the self-moving device is fitted with a front camera, controlling the self-moving device to rotate until the field of view of the front camera covers the stained area, and acquiring the second environment image of the stained area with the front camera;
or
in a case where a rotatable camera is mounted on top of the self-moving device, controlling the rotatable camera to rotate until its field of view covers the stained area, and acquiring the second environment image of the stained area with the rotatable camera.
12. The method of claim 10, wherein re-determining a second job parameter according to the soiling level of the stained area comprises:
if the soiling level of the stained area exceeds a set soiling-level grade, increasing the value of each numerical parameter in the first job parameter to obtain the second job parameter.
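Claims 10 and 12 together describe a clean-inspect-escalate loop: clean, re-image the area, and if it is still too soiled, retry with every numerical parameter increased. A sketch under assumed thresholds; the 1.5x escalation factor, the grade threshold, and the `clean`/`inspect` callbacks are all illustrative:

```python
def escalate(params, factor=1.5):
    """Claim 12 step: increase each numerical job parameter.
    The 1.5x factor is an assumption; the claim only requires an increase."""
    return {k: (v * factor if isinstance(v, (int, float)) else v)
            for k, v in params.items()}

def clean_until_done(clean, inspect, params, max_rounds=3):
    """Claim 10 loop: execute the job, acquire the second environment image
    (here abstracted into `inspect`, which returns a soiling level), and
    retry with escalated parameters while the result is unsatisfactory."""
    SOILING_GRADE = 2  # assumed set soiling-level grade
    for _ in range(max_rounds):
        clean(params)
        level = inspect()  # soiling level read from the second image
        if level <= SOILING_GRADE:
            return True    # set cleaning effect reached
        params = escalate(params)
    return False           # gave up after max_rounds attempts
```

The `max_rounds` cap is a practical addition not present in the claims, which do not bound the number of retries.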
13. The method of claim 10, further comprising, before acquiring the second environment image of the stained area:
judging whether the stained area belongs to a specific area, the specific area being an area where stains occur more often than a set frequency threshold, or an area belonging to a specific scene;
and if so, executing the operation of acquiring the second environment image of the stained area and the subsequent operations.
14. The method of claim 7, further comprising:
in a case where the second area is a stained area, judging whether the environment map already contains stain marking information corresponding to the stained area;
if not, adding stain marking information corresponding to the stained area to the environment map; and
recording, for the stain marking information corresponding to the stained area on the environment map, the number of job tasks executed on the stained area after marking and the result of each job task;
and if both the number and the job results reach set conditions, deleting the stain marking information corresponding to the stained area.
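The bookkeeping in claim 14 amounts to a map-side lifecycle for stain marks: add a mark, log each job result against it, and delete it once the "set conditions" are met. A sketch under one assumed interpretation of those conditions (three consecutive successful results); the class and its threshold are illustrative, not from the patent:

```python
class StainMarks:
    """Maintain stain marking information on the environment map, as in
    claim 14: record post-marking job results per position and drop the
    mark once the (assumed) deletion condition is satisfied."""
    UNMARK_AFTER = 3  # assumed condition: three consecutive clean results

    def __init__(self):
        self.marks = {}  # position -> list of job results (True = clean)

    def record_job(self, pos, success):
        """Log one job result for a marked position, deleting the mark
        when the last UNMARK_AFTER results are all successful."""
        self.marks.setdefault(pos, []).append(success)
        history = self.marks[pos]
        if len(history) >= self.UNMARK_AFTER and all(history[-self.UNMARK_AFTER:]):
            del self.marks[pos]

    def is_marked(self, pos):
        return pos in self.marks
```

Keying the marks by position keeps the check in claim 14's first step ("does the map already contain a mark for this stained area") a constant-time dictionary lookup.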
15. A job method applicable to a self-moving device, the method comprising:
identifying a stained area present in a first area in a case where an area job instruction is received;
identifying at least two of a stain attribute, a floor attribute, and scene information of the stained area from an environment image of the stained area;
determining, from the at least two kinds of information, job parameters adapted to the stained area;
and executing the job task in the stained area using those job parameters.
16. A self-moving device, comprising: a device body on which a processor and a memory for storing a computer program are provided;
the processor being configured to execute the computer program to:
receive a job instruction;
determine a target job mode according to the job instruction, different job modes corresponding to different area attributes and the target job mode corresponding to a target area attribute;
identify, according to the target job mode, at least a partial area having the target area attribute in the first area as a second area in which a job task is to be executed;
and execute the job task in the second area using job parameters adapted to the second area.
17. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 15.
CN202110648198.4A 2021-06-10 2021-06-10 Operation method, self-moving device and storage medium Pending CN115469648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110648198.4A CN115469648A (en) 2021-06-10 2021-06-10 Operation method, self-moving device and storage medium

Publications (1)

Publication Number Publication Date
CN115469648A 2022-12-13

Family

ID=84363337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110648198.4A Pending CN115469648A (en) 2021-06-10 2021-06-10 Operation method, self-moving device and storage medium

Country Status (1)

Country Link
CN (1) CN115469648A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116300974A (en) * 2023-05-18 2023-06-23 科沃斯家用机器人有限公司 Operation planning, partitioning, operation method, autonomous mobile device and cleaning robot

Similar Documents

Publication Publication Date Title
CN111596651B (en) Environmental area division and fixed-point cleaning method, equipment and storage medium
CN110575099B (en) Fixed-point cleaning method, floor sweeping robot and storage medium
US20210141382A1 (en) Method For Controlling An Autonomous Mobile Robot
US12089801B2 (en) Scheduling and control system for autonomous robots
US20210330166A1 (en) Method and apparatus for controlling mopping robot, and non-transitory computer-readable storage medium
US20200319640A1 (en) Method for navigation of a robot
KR101637906B1 (en) Methods, devices, program and recording medium for clearing garbage
US20210338034A1 (en) Method and apparatus for controlling mopping robot, and non-transitory computer-readable storage medium
US11116374B2 (en) Self-actuated cleaning head for an autonomous vacuum
EP3616853B1 (en) Mobile robot and control thereof for household
CN111035328A (en) Robot cleaning method and robot
KR102219801B1 (en) A moving-robot and control method thereof
US20160167234A1 (en) Mobile robot providing environmental mapping for household environmental control
CN105446332A (en) Automatic cleaning control method and device and electronic device
CN211022482U (en) Cleaning robot
CN106580193A (en) Intelligent floor sweeping method and device and floor sweeping robot
CN116509280A (en) Robot control method, robot, and storage medium
CN115469648A (en) Operation method, self-moving device and storage medium
CN111973097A (en) Sweeper control method and device, sweeper and computer readable storage medium
CN114158980A (en) Job method, job mode configuration method, device, and storage medium
CN113995355B (en) Robot management method, device, equipment and readable storage medium
CN117297449A (en) Cleaning setting method, cleaning apparatus, computer program product, and storage medium
CN111343696A (en) Communication method of self-moving equipment, self-moving equipment and storage medium
CN111367271A (en) Planning method, system and chip for cleaning path of robot
CN111338330B (en) Job position determination method, self-moving device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination