CN115413959A - Operation method and device based on cleaning robot, electronic equipment and medium - Google Patents

Operation method and device based on cleaning robot, electronic equipment and medium

Info

Publication number
CN115413959A
CN115413959A
Authority
CN
China
Prior art keywords
cleaning robot
area
cleaning
determining
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110515240.5A
Other languages
Chinese (zh)
Inventor
柯南海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Robozone Technology Co Ltd filed Critical Midea Robozone Technology Co Ltd
Priority to CN202110515240.5A
Publication of CN115413959A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A47L11/4008 Arrangements of switches, indicators or the like
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • A47L2201/06 Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Abstract

The application discloses an operation method and device based on a cleaning robot, an electronic device and a medium. In the method, after an operation instruction is obtained, a first area identifier where the cleaning robot is currently located is determined; a first operation mode is determined based on the first area identifier, where the operation mode represents at least one of the cleaning duration, cleaning force and cleaning mode of the cleaning robot; and the cleaning robot is determined to operate in the first operation mode. By applying the technical solution of the application, a pre-trained image detection classification model can be deployed in the cleaning robot, so that the area currently to be cleaned is first identified, and a corresponding cleaning mode is selected for that area according to how the areas to be cleaned differ. This avoids the problem in the related art that the cleaning mode cannot be changed for different indoor areas.

Description

Operation method and device based on cleaning robot, electronic equipment and medium
Technical Field
The present application relates to data processing technologies, and in particular, to an operating method and apparatus based on a cleaning robot, an electronic device, and a medium.
Background
With the rise of the communications era, smart devices have developed continuously and are used by more and more users.
In particular, with the rapid development of the communications era, it has become common for people to use a cleaning robot instead of cleaning manually. However, a conventional cleaning robot applies the same cleaning force and cleaning mode to all indoor areas, so its efficiency when cleaning a specific area is not high, which in turn degrades the user experience.
Disclosure of Invention
The embodiments of the application provide an operation method and device based on a cleaning robot, an electronic device and a medium, which are used to solve the problem in the related art that the cleaning mode cannot be changed for different indoor areas.
According to an aspect of the embodiments of the present application, there is provided a cleaning robot-based operation method, applied to a cleaning robot, including:
acquiring an operation instruction, and determining a first area identifier where the cleaning robot is located currently;
determining a first operation mode based on the first area identification, wherein the operation mode is used for representing at least one of cleaning time length, cleaning force and cleaning mode of the cleaning robot;
determining that the cleaning robot is operating in the first operating mode.
Optionally, in another embodiment based on the above method of the present application, after the determining that the cleaning robot is operating in the first operating mode, the method further includes:
determining a second area identifier where the cleaning robot is located currently, and determining a second operation mode based on the second area identifier, wherein the second area is an area different from the first area;
determining that the cleaning robot is switched from the first operation mode to the second operation mode and operates in the second operation mode.
Optionally, in another embodiment of the method according to the present application, the determining a first area identifier where the cleaning robot is currently located includes:
identifying whether object features of a target area exist in an area where the cleaning robot is located currently by using an image detection classification model, wherein the object features of the target area include at least one of size features, color features and contour features;
and determining the existence of the target object characteristic, and determining the first area identification where the cleaning robot is located currently.
Optionally, in another embodiment based on the method of the present application, the determining the first area identifier where the cleaning robot is currently located includes:
identifying the coordinates of the area where the cleaning robot is currently located by using a positioning module;
and determining a first area identification in which the cleaning robot is currently positioned based on the coordinates of the area in which the cleaning robot is currently positioned.
Optionally, in another embodiment of the foregoing method based on the present application, before the obtaining the operation instruction, the method further includes:
acquiring a map construction instruction, wherein the map construction instruction comprises at least two areas to be cleaned;
determining a shooting scheme corresponding to the area to be cleaned according to a construction requirement corresponding to the map construction instruction, wherein different shooting schemes correspond to different shooting heights and/or different shooting angles;
and based on the difference of the areas to be cleaned, adopting a corresponding shooting scheme to construct a map.
Optionally, in another embodiment of the foregoing method based on the present application, before the obtaining the operation instruction, the method further includes:
acquiring at least two sample images, wherein the sample images comprise at least one regional object feature;
marking corresponding area identifications for the sample images on the basis of the area objects;
and training a preset image semantic segmentation model by using the sample image marked with the area identifier and the area object characteristics included in the sample image to obtain a first image detection classification model meeting preset conditions, wherein the first image detection classification model is used for determining the area identifier where the cleaning robot is located at present.
Optionally, in another embodiment based on the foregoing method of the present application, after obtaining the first image detection classification model that satisfies the preset condition, the method further includes:
performing model compression on the first image detection classification model to obtain a second image detection classification model;
deploying the first image detection classification model to a server side, and deploying the second image detection classification model to the cleaning robot;
after the operation instruction is obtained, determining an identification mode based on the operation state of the cleaning robot, wherein the identification mode corresponds to identification by using the first image detection classification model or identification by using the second image detection classification model;
based on the recognition pattern, a first zone identity in which the cleaning robot is currently located is determined.
According to another aspect of the embodiments of the present application, there is provided a running device based on a cleaning robot, applied to the cleaning robot, including:
the acquisition module is used for acquiring an operation instruction and determining a first area identifier where the cleaning robot is located currently;
a determination module configured to determine a first operation mode based on the first zone identifier, wherein the operation mode is used for representing at least one of cleaning time length, cleaning force and cleaning mode of the cleaning robot;
an operation module configured to determine that the cleaning robot is operating in the first operation mode.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a display, configured to cooperate with the memory to execute the executable instructions so as to perform the operations of any one of the above cleaning robot-based operation methods.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the above-mentioned cleaning robot-based operation methods.
In the method, after an operation instruction is obtained, a first area identifier where the cleaning robot is currently located is determined; a first operation mode is determined based on the first area identifier, where the operation mode represents at least one of the cleaning duration, cleaning force and cleaning mode of the cleaning robot; and the cleaning robot is determined to operate in the first operation mode. By applying the technical solution of the application, a pre-trained image detection classification model can be deployed in the cleaning robot, so that the area currently to be cleaned is first identified, and a corresponding cleaning mode is selected for that area according to how the areas to be cleaned differ. This avoids the problem in the related art that the cleaning mode cannot be changed for different indoor areas.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram of an operating system architecture based on a cleaning robot according to the present application;
FIG. 2 is a schematic view of a cleaning robot based operation method proposed in the present application;
fig. 3 is a display diagram of an image captured by the cleaning robot proposed in the present application;
fig. 4 is a flowchart of the operation of the cleaning robot proposed in the present application;
FIG. 5 is a schematic structural diagram of a cleaning robot-based operation device proposed in the present application;
fig. 6 is a schematic structural diagram of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions of the various embodiments of the present application may be combined with each other, provided that such a combination can be implemented by a person skilled in the art; where a combination of technical solutions is contradictory or cannot be implemented, the combination should be considered absent and outside the protection scope of the present application.
It should be noted that all directional indicators in the embodiments (such as upper, lower, left, right, front and rear) are only used to explain the relative positional relationship, motion situation, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicator changes accordingly.
A cleaning robot-based operation method according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 4. It should be noted that the following application scenarios are shown merely to facilitate understanding of the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, the embodiments of the present application may be applied to any applicable scenario.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the cleaning robot-based operation method or the cleaning robot-based operation device of the embodiments of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of the cleaning robots 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium of communication links between the cleaning robots 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of cleaning robots, networks, and servers in fig. 1 is merely illustrative. There may be any number of cleaning robots, networks, and servers, as desired for the implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the cleaning robots 101, 102, 103 to interact with the server 105 via the network 104, for example to receive or send messages and capture images. The cleaning robots 101, 102, 103 may include various electronic devices having a display screen and a camera acquisition device.
The cleaning robots 101, 102, 103 in the present application may be cleaning robots that provide various services. For example, the user implements by means of the cleaning robot 103 (which may also be the cleaning robot 101 or 102): acquiring an operation instruction, and determining a first area identifier where the cleaning robot is located currently; determining a first operation mode based on the first area identification, wherein the operation mode is used for representing at least one of cleaning time length, cleaning force and cleaning mode of the cleaning robot; determining that the cleaning robot is operating in the first operating mode.
It should be noted that the cleaning robot-based operation method provided in the embodiments of the present application may be executed by one or more of the cleaning robots 101, 102, and 103, and/or the server 105, and accordingly, the cleaning robot-based operation device provided in the embodiments of the present application is generally disposed in the corresponding cleaning robot, and/or the server 105, but the present application is not limited thereto.
Furthermore, the application also provides an operation method and device based on the cleaning robot, a target terminal and a medium.
Fig. 2 schematically shows a flow diagram of a method for operating a cleaning robot according to an embodiment of the present application. As shown in fig. 2, the method is applied to a cleaning robot, including:
s101, acquiring an operation instruction, and determining a first area identifier where the cleaning robot is located currently.
The cleaning robot is a type of smart household appliance that can automatically complete floor cleaning within an area with a certain degree of artificial intelligence. Generally, it cleans by brushing and vacuuming, first sucking debris on the floor into its dust box, thereby cleaning the floor.
It should be noted that the operation instruction in the present application may be generated by a user, or may be generated according to a preset rule. For example, the robot may be instructed at preset intervals to execute an operation instruction for cleaning the room.
In addition, the first area indicator is not limited in the present application. For example, it may correspond to a bedroom, to a kitchen, to an office area, etc.
The number of the first area marks is not limited in the present application, and may be, for example, one or more.
S102, determining a first operation mode based on the first area identification, wherein the operation mode is used for representing at least one of cleaning duration, cleaning force and cleaning mode of the cleaning robot.
Further, after the operation instruction is obtained, the corresponding operation mode needs to be determined first; operation modes may differ in cleaning duration, cleaning force and cleaning mode.
The cleaning force may correspond to using different cleaning aids, for example a detergent for removing cooking fumes, a detergent for removing dust, a freshener for removing haze, and so on. For the cleaning mode, a high-power cleaning mode or a low-power cleaning mode may be selected.
For example, as shown in fig. 3, for a first area identifier corresponding to a kitchen, the amount of cooking fume is generally high, so a mode with a high cleaning force or with a fume-removing detergent can be selected to ensure the cleaning effect. In addition, since the area of a kitchen is usually small, a cleaning mode with a short cleaning duration can be selected. Conversely, for a first area identifier corresponding to an office area, a cleaning mode with a longer cleaning duration can be selected, since an office area is generally larger.
Further optionally, for a first area identifier corresponding to a bedroom, the requirement for comfort and quietness is high, since the bedroom is where the user rests; therefore, to guarantee the user experience, a low-noise (lower-power) cleaning mode can be selected. Likewise, for a first area identifier corresponding to a toilet, the requirement for comfort and quietness is lower, so a low-noise mode is not needed and a (higher-power) cleaning mode can be used.
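To make this area-to-mode mapping concrete, the following is a minimal Python sketch of one possible implementation; the area names, mode fields and numeric values are illustrative assumptions and are not fixed by the present application.

```python
from dataclasses import dataclass

@dataclass
class OperationMode:
    cleaning_duration_min: int   # cleaning duration, in minutes (hypothetical units)
    cleaning_force: str          # e.g. "fume_detergent", "dust_detergent", "none"
    cleaning_method: str         # e.g. "high_power", "low_power"

# Hypothetical lookup table: area identifier -> first operation mode.
AREA_MODES = {
    "kitchen": OperationMode(20, "fume_detergent", "high_power"),   # small area, heavy fume
    "office":  OperationMode(60, "dust_detergent", "high_power"),   # large area, long duration
    "bedroom": OperationMode(30, "none",           "low_power"),    # quiet, low noise
    "toilet":  OperationMode(15, "none",           "high_power"),   # noise less critical
}

def determine_operation_mode(area_id: str) -> OperationMode:
    """Determine the first operation mode based on the first area identifier (step S102)."""
    return AREA_MODES.get(area_id, OperationMode(30, "none", "low_power"))
```

Here `determine_operation_mode` plays the role of step S102; in practice the table entries could equally be configured by the user.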
And S103, determining that the cleaning robot operates in the first operation mode in the first area.
Further, after the first operation mode used for representing the cleaning duration, the cleaning force and the cleaning mode of the cleaning robot is determined, the cleaning robot can be driven to operate in the first operation mode until the cleaning robot cleans the first area completely.
It can be understood that after it is detected that the cleaning robot has completely cleaned the first area, the operation modes corresponding to the other areas can be selected for cleaning according to the identifications of the other areas.
In the method, after an operation instruction is obtained, a first area identifier where the cleaning robot is currently located is determined; a first operation mode is determined based on the first area identifier, where the operation mode represents at least one of the cleaning duration, cleaning force and cleaning mode of the cleaning robot; and the cleaning robot is determined to operate in the first operation mode. By applying the technical solution of the application, a pre-trained image detection classification model can be deployed in the cleaning robot, so that the area currently to be cleaned is first identified, and a corresponding cleaning mode is selected for that area according to how the areas to be cleaned differ. This avoids the problem in the related art that the cleaning mode cannot be changed for different indoor areas.
Optionally, in a possible embodiment of the present application, after S103 (determining that the cleaning robot operates in the first operation mode), the method further includes:
determining a second area identifier where the cleaning robot is located currently, and determining a second operation mode based on the second area identifier, wherein the second area is an area different from the first area;
and determining that the cleaning robot is switched from the first operation mode to the second operation mode and operates in the second operation mode.
Furthermore, when the cleaning robot is detected to have cleaned the current area, a second operation mode corresponding to the second area can be selected for cleaning according to the identification of the second area. It should be noted that the first region and the second region should be different regions. For example, the first area may be a kitchen, the second area may be a bedroom, etc. Alternatively, the first area may be a toilet and the second area may be a kitchen.
It should be noted that the second operation mode may be different from the first operation mode or the same as it. For example, when the cleaning robot has finished cleaning the first area (bedroom A) and enters the second area (bedroom B), the second operation mode may be the same as the first operation mode.
For example, for a first area identifier corresponding to a kitchen, the first operation mode may be an operation mode with a short cleaning duration. After the cleaning robot finishes cleaning the kitchen, it may enter an office area (a second area). It will be appreciated that, for the second area identifier corresponding to the office area, a cleaning mode with a longer cleaning duration may be selected, typically because the office area is larger. The cleaning robot can therefore determine a second operation mode with a long cleaning duration and perform the cleaning operation in the second area.
It should be noted that, in the present application, the cleaning robot itself may determine that the condition is met and then switch from the first operation mode to the second operation mode and operate accordingly. Alternatively, the cleaning robot may transmit the current environment information to the server, and after the server determines that the condition is met, the server sends to the cleaning robot an instruction for switching from the first operation mode to the second operation mode.
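The area-change-driven mode switching described above can be pictured, purely as an assumption-laden sketch, by the following loop; `robot`, `detect_area_id` and `determine_mode` are hypothetical interfaces, and whether this logic runs on the robot or on the server is left open, as in the text.

```python
def run(robot, detect_area_id, determine_mode):
    """Operate the robot, switching operation modes whenever the detected area changes.

    All three arguments are hypothetical interfaces: detect_area_id() returns the current
    area identifier (e.g. from the image detection classification model or the positioning
    module), determine_mode(area_id) maps it to an operation mode, and robot.operate(mode)
    performs one cleaning step in that mode.
    """
    current_area = detect_area_id()                 # first area identifier (S101)
    current_mode = determine_mode(current_area)     # first operation mode (S102)
    while not robot.finished():
        robot.operate(current_mode)                 # operate in the current mode (S103)
        new_area = detect_area_id()
        if new_area is not None and new_area != current_area:
            # The robot has entered a different (second) area: switch operation modes.
            current_area = new_area
            current_mode = determine_mode(new_area)
```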
Optionally, in a possible embodiment of the present application, S101 (determining a first area identifier where the cleaning robot is currently located) further includes:
identifying whether object features of a target area exist in the current area where the cleaning robot is located by utilizing an image detection classification model, wherein the object features of the target area include at least one of size features, color features and contour features;
and determining the existence of the target object characteristics, and determining the first area identification where the cleaning robot is located currently.
Furthermore, the cleaning robot can shoot a plurality of images of the current region, and the preset image detection classification model is used for carrying out feature recognition on the plurality of images to be recognized, so that whether the object features of the target region exist in the current region of the cleaning robot or not can be judged.
For example, when the target area object is a range hood, the current area can be determined to be a kitchen. And when the object in the target area is a bed, the current area can be determined to be a bedroom.
The image detection classification model is not specifically limited in the application; for example, it may be a Convolutional Neural Network (CNN). Convolutional neural networks are a class of feed-forward neural networks that contain convolution computations and have a deep structure, and are one of the representative algorithms of deep learning. A convolutional neural network has a representation learning capability and can perform translation-invariant classification of input information according to its hierarchical structure. Owing to the powerful feature characterization capability of a CNN on images, it has shown remarkable results in image classification, object detection, semantic segmentation and other fields.
Furthermore, the image detection classification model can be used to detect feature information in the images to be recognized that are collected by the camera device, perform feature recognition on that information, and determine whether the images to be recognized contain the target object. Specifically, the image to be recognized is input into a preset convolutional neural network model, and the output of the last fully connected layer (FC) of the model is taken as the recognition result of the feature data corresponding to the image to be recognized.
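As an illustration only, a PyTorch-style sketch of this recognition step is given below; the backbone, class list, input size and the rule of taking the argmax of the final fully connected layer output are assumptions rather than details specified by the present application.

```python
import torch
import torchvision.transforms as T
from torchvision import models

# Hypothetical class list: index -> target area object (e.g. range hood => kitchen, bed => bedroom).
CLASSES = ["range_hood", "bed", "desk", "toilet", "background"]

transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def detect_area_objects(model: torch.nn.Module, images) -> list[str]:
    """Return the target area object predicted for each captured image (illustrative only)."""
    model.eval()
    results = []
    with torch.no_grad():
        for img in images:                    # PIL images captured by the robot's camera
            x = transform(img).unsqueeze(0)   # shape (1, 3, 224, 224)
            logits = model(x)                 # output of the last fully connected (FC) layer
            results.append(CLASSES[int(logits.argmax(dim=1))])
    return results

# Example: a standard CNN backbone with its FC head sized to the hypothetical class list.
cnn = models.resnet18(num_classes=len(CLASSES))
```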
Optionally, in a possible embodiment of the present application, S101 (determining a first area identifier where the cleaning robot is currently located) further includes:
identifying the current area coordinate of the cleaning robot by using a positioning module;
and determining a first area identification in which the cleaning robot is currently located based on the area coordinates in which the cleaning robot is currently located.
Furthermore, in the mode of determining the area identifier where the cleaning robot is located, the current position coordinate of the robot can be determined through the positioning module, and the area identifier where the cleaning robot is located is judged according to the position coordinate.
In one mode, in the process of determining the area identifier by using the positioning module, the image detection classification model can be used in combination to identify whether the object feature of the target area exists in the current area of the cleaning robot. Therefore, the purpose of improving the accuracy of judging the area identification is achieved.
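A minimal sketch of the coordinate-based determination, and of combining it with the image-based result, might look like this; the rectangular area map and the tie-breaking rule are invented solely for illustration.

```python
from typing import Optional

# Hypothetical cleaning map: area identifier -> (x_min, y_min, x_max, y_max) rectangle, in metres.
AREA_BOUNDS = {
    "kitchen": (0.0, 0.0, 3.0, 4.0),
    "bedroom": (3.0, 0.0, 8.0, 4.0),
}

def area_from_coordinates(x: float, y: float) -> Optional[str]:
    """Determine the area identifier from the coordinates reported by the positioning module."""
    for area_id, (x0, y0, x1, y1) in AREA_BOUNDS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return area_id
    return None

def determine_first_area(x: float, y: float, image_based_area: Optional[str]) -> Optional[str]:
    """Combine the positioning result with the image detection result to improve accuracy."""
    coord_area = area_from_coordinates(x, y)
    # The actual fusion rule is not specified by the present application; here, if the two
    # sources disagree, the image-based result is (arbitrarily) preferred.
    if coord_area and image_based_area and coord_area != image_based_area:
        return image_based_area
    return coord_area or image_based_area
```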
Optionally, before the obtaining of the operation instruction, the method further includes:
acquiring a map construction instruction, wherein the map construction instruction comprises at least two areas to be cleaned;
determining shooting schemes corresponding to the areas to be cleaned according to construction requirements corresponding to the map construction instructions, wherein different shooting schemes correspond to different shooting heights and/or shooting visual angles;
and based on the difference of the areas to be cleaned, adopting a corresponding shooting scheme to construct a map.
Further, in the process of constructing an indoor cleaning map by using the cleaning robot in the initial stage, different shooting heights and/or shooting angles can be selected to shoot different areas.
It will be appreciated that, for example, in a kitchen area the most salient objects are the range hood and the cooktop, and objects such as range hoods are usually mounted high up, so a cleaning robot located on the floor cannot image them well. Therefore, according to the construction requirement issued by the user, when a certain area to be cleaned is determined to be a kitchen area, a shooting scheme with a higher shooting height and a wider shooting view angle can be generated, so that the map of the kitchen area is constructed specifically with that shooting scheme.
Alternatively, for a bedroom area, the most salient object feature is the bed frame, and objects such as bed frames are usually located at a relatively low, horizontal position; therefore, when the construction requirement issued by the user determines that a certain area to be cleaned is a bedroom area, a shooting scheme with a lower shooting height can be generated, so that the map of the bedroom area is constructed specifically with that scheme.
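As a hedged example, the area-dependent shooting scheme could be represented as follows; the height and view-angle values are invented and only show the kitchen-high versus bedroom-low contrast described above.

```python
from dataclasses import dataclass

@dataclass
class ShootingScheme:
    camera_height_cm: float   # shooting height (hypothetical units)
    view_angle_deg: float     # shooting view angle

def shooting_scheme_for(area_type: str) -> ShootingScheme:
    """Pick a shooting scheme for map construction based on the area to be cleaned."""
    if area_type == "kitchen":
        # Range hoods sit high, so raise the camera and widen the view angle.
        return ShootingScheme(camera_height_cm=40.0, view_angle_deg=120.0)
    if area_type == "bedroom":
        # Bed frames sit low, so a lower shooting height is enough.
        return ShootingScheme(camera_height_cm=15.0, view_angle_deg=90.0)
    return ShootingScheme(camera_height_cm=25.0, view_angle_deg=100.0)  # default scheme
```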
Optionally, in a possible embodiment of the present application, before S101 (obtaining the operation instruction), the method may further include the following steps:
acquiring at least two sample images, wherein the sample images comprise at least one regional object feature;
labeling corresponding area identifications for each sample image based on the area objects;
and training a preset image semantic segmentation model by using the sample image marked with the area identifier and the area object characteristics included in the sample image to obtain a first image detection classification model meeting preset conditions, wherein the first image detection classification model is used for determining the area identifier where the cleaning robot is located at present.
Further, before the preset image detection classification model is used on the images collected from each area, it needs to be trained. Specifically, a number of sample images including at least two area object features are first acquired. A basic, untrained image semantic segmentation model is then trained with these sample images to obtain a first image detection classification model that satisfies a preset condition.
It should be noted that, in the present application, corresponding region identifiers need to be labeled for each sample image according to the region object. For example, when a regional object of the range hood appears in the sample image, the sample image needs to be labeled with a regional identifier of the kitchen. Or when the regional objects of the bed frame appear in the sample image, the regional identification of the bedroom needs to be marked on the sample image.
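The labelling step might be sketched as below; the object-to-area mapping and the sample structure are assumptions used only to illustrate how area identifiers could be attached to sample images.

```python
# Hypothetical mapping used when labelling sample images: area object -> area identifier.
OBJECT_TO_AREA = {
    "range_hood": "kitchen",
    "cooktop": "kitchen",
    "bed_frame": "bedroom",
}

def label_samples(samples):
    """Attach an area identifier label to each sample image based on the area objects it contains.

    `samples` is assumed to be a list of dicts such as {"image": ..., "objects": ["range_hood"]};
    this structure is invented purely for illustration.
    """
    labelled = []
    for sample in samples:
        area_ids = {OBJECT_TO_AREA[obj] for obj in sample["objects"] if obj in OBJECT_TO_AREA}
        labelled.append({**sample, "area_ids": sorted(area_ids)})
    return labelled
```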
The sample characteristics (for example, size characteristics, outline characteristics, color characteristics and the like) of at least one region object included in the sample image can be identified through a preset image semantic segmentation model. Furthermore, the image semantic segmentation model may further classify object features of each region in the sample image, and classify sample features belonging to the same class into objects of the same type, so that a plurality of sample features obtained after semantic segmentation of the sample image may be sample features composed of a plurality of different types.
It should be noted that, when the neural network image classification model performs semantic segmentation processing on the sample image, the more accurate the classification of the pixel points in the sample image is, the higher the accuracy rate of identifying the labeled object in the sample image is. It should be noted that the preset condition may be set by a user.
For example, the preset condition may be set as: the classification accuracy of the pixel points reaches 70% or more. The image detection classification model is then trained repeatedly with the sample images, and once its pixel classification accuracy reaches 70% or more, the model can be applied in the embodiments of the present application to perform image feature recognition on the images to be recognized captured by the camera device of the cleaning robot.
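A rough PyTorch-style sketch of training until the preset condition is met is shown below; the optimizer, loss and the way pixel accuracy is computed are assumptions, and the 0.70 threshold simply mirrors the 70% example above.

```python
import torch
import torch.nn as nn

def train_until_condition(model, loader, target_pixel_acc: float = 0.70, max_epochs: int = 100):
    """Train the image semantic segmentation model until its pixel classification accuracy
    reaches the preset condition (e.g. 70%), then return it as the first image detection
    classification model. Purely illustrative; `loader` yields (image, area_label_mask) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(max_epochs):
        correct, total = 0, 0
        for images, masks in loader:          # masks: per-pixel area-object labels
            optimizer.zero_grad()
            logits = model(images)            # shape (N, num_classes, H, W)
            loss = criterion(logits, masks)
            loss.backward()
            optimizer.step()
            preds = logits.argmax(dim=1)
            correct += (preds == masks).sum().item()
            total += masks.numel()
        if total and correct / total >= target_pixel_acc:   # preset condition satisfied
            break
    return model
```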
Further optionally, after obtaining the first image detection classification model meeting the preset condition, the method further includes:
performing model compression on the first image detection classification model to obtain a second image detection classification model;
deploying the first image detection classification model to a server side, and deploying the second image detection classification model to the cleaning robot;
after the operation instruction is obtained, determining an identification mode based on the operation state of the cleaning robot, wherein the identification mode corresponds to identification by using a first image detection classification model or identification by using a second image detection classification model;
based on the recognition pattern, a first zone identity is determined in which the cleaning robot is currently located.
Further, after the first image detection classification model is obtained, in order to avoid the defect that its overly large data architecture would occupy a large amount of memory on the robot, the present application may also perform model compression on it to obtain a second image detection classification model with a smaller data architecture.
Optionally, the first image detection classification model may be compressed directly, for example by sparsifying the model kernels and by pruning the model. Kernel sparsification requires support from sparse computation libraries, and its acceleration effect may be limited by many factors such as bandwidth and sparsity. Model pruning, in turn, directly removes unimportant filter parameters from the original model. Because neural networks are highly adaptive and a model with a large data architecture is often redundant, the performance lost by removing some parameters can be recovered by retraining; thus, with a suitable pruning strategy and retraining, the existing model can be compressed effectively to a large extent, which is currently the most commonly used approach.
Furthermore, the second image detection classification model with smaller data structure can be deployed on the cleaning robot after being obtained. Therefore, the cleaning robot can subsequently identify a plurality of images to be identified collected by the camera device by utilizing the compressed image detection classification model, and then determine the corresponding area identification.
In addition, the first image detection classification model, which has the larger data architecture, can be deployed on the server, so that the identification mode is determined based on the running state of the cleaning robot and the corresponding image detection classification model is then selected in a targeted manner to determine the corresponding area identifier.
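To make the compression-and-deployment idea concrete, here is a hedged sketch: pruning is done with PyTorch's torch.nn.utils.prune utilities as one possible form of model compression, and the rule for choosing between the server-side first model and the on-device second model from the running state is an invented example.

```python
import torch
import torch.nn.utils.prune as prune

def compress_model(first_model: torch.nn.Module, amount: float = 0.5) -> torch.nn.Module:
    """Obtain the second image detection classification model by pruning the first one.
    (One possible form of model compression; the application also mentions kernel sparsification.)"""
    for module in first_model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # make the pruning permanent
    return first_model                        # retraining would normally follow

def choose_recognition_mode(robot_state: dict) -> str:
    """Pick the identification mode from the robot's running state (hypothetical rule)."""
    if robot_state.get("network_ok") and robot_state.get("battery_pct", 0) > 20:
        return "server_first_model"    # send images to the server-side first model
    return "on_device_second_model"    # fall back to the compressed on-device model
```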
Further, fig. 4 is an overall flowchart of the cleaning robot operation method provided in the present application: an operation instruction is obtained, and the first area identifier where the cleaning robot is currently located is determined; a first operation mode is determined based on the first area identifier, where the operation mode represents at least one of the cleaning duration, cleaning force and cleaning mode of the cleaning robot; and the cleaning robot is determined to operate in the first operation mode. By applying the technical solution of the application, a pre-trained image detection classification model can be deployed in the cleaning robot, so that the area currently to be cleaned is first identified, and a corresponding cleaning mode is selected for that area according to how the areas to be cleaned differ. This avoids the problem in the related art that the cleaning mode cannot be changed for different indoor areas.
In another embodiment of the present application, as shown in fig. 5, the present application further provides an operation device based on a cleaning robot. The device comprises an acquisition module 201, a determination module 202 and an operation module 203, where:
the acquisition module is used for acquiring an operation instruction and determining a first area identifier where the cleaning robot is located currently;
a determination module configured to determine a first operation mode based on the first zone identifier, wherein the operation mode is used for representing at least one of cleaning time length, cleaning force and cleaning mode of the cleaning robot;
an operation module configured to determine that the cleaning robot is operating in the first operation mode.
With this device, after an operation instruction is obtained, a first area identifier where the cleaning robot is currently located is determined; a first operation mode is determined based on the first area identifier, where the operation mode represents at least one of the cleaning duration, cleaning force and cleaning mode of the cleaning robot; and the cleaning robot is determined to operate in the first operation mode. By applying the technical solution of the application, a pre-trained image detection classification model can be deployed in the cleaning robot, so that the area currently to be cleaned is first identified, and a corresponding cleaning mode is selected for that area according to how the areas to be cleaned differ. This avoids the problem in the related art that the cleaning mode cannot be changed for different indoor areas.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201, configured to determine a second area identifier where the cleaning robot is currently located, and determine a second operation mode based on the second area identifier, where the second area is different from the first area;
an obtaining module 201 configured to determine that the cleaning robot is switched from the first operation mode to the second operation mode and operates in the second operation mode.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201, configured to identify, by using an image detection classification model, whether there is a target area object feature in an area where the cleaning robot is currently located, where the area object feature includes at least one of a size feature, a color feature, and a contour feature;
an obtaining module 201 configured to determine that the target object feature exists, and determine a first area identifier where the cleaning robot is currently located.
In another embodiment of the present application, the obtaining module 201 further includes:
an acquisition module 201 configured to identify, using a positioning module, coordinates of an area where the cleaning robot is currently located;
an obtaining module 201 configured to determine a first area identifier where the cleaning robot is currently located based on the area coordinates where the cleaning robot is currently located.
In another embodiment of the present application, the obtaining module 201 further includes:
the cleaning system comprises an acquisition module 201, a cleaning module and a cleaning module, wherein the acquisition module is configured to acquire a map construction instruction, and the map construction instruction comprises at least two areas to be cleaned;
an obtaining module 201, configured to determine a shooting scheme corresponding to the area to be cleaned according to a construction requirement corresponding to the map construction instruction, where different shooting schemes correspond to different shooting heights and/or shooting angles;
an obtaining module 201 configured to adopt a corresponding shooting scheme for map construction based on the difference of the areas to be cleaned.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201 configured to obtain at least two sample images, wherein the sample images include at least one regional object feature;
an obtaining module 201, configured to label, based on the region object, a corresponding region identifier for each sample image;
an obtaining module 201, configured to train a preset image semantic segmentation model by using the sample image labeled with the area identifier and the area object features included in the sample image, to obtain a first image detection classification model meeting a preset condition, where the first image detection classification model is used to determine the area identifier where the cleaning robot is currently located.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201, configured to perform model compression on the first image detection classification model to obtain a second image detection classification model;
an acquisition module 201 configured to deploy the first image detection classification model to a server side and the second image detection classification model to the cleaning robot;
an obtaining module 201 configured to determine, after obtaining the operation instruction, an identification mode based on an operation state of the cleaning robot, where the identification mode corresponds to identification by using the first image detection classification model or identification by using the second image detection classification model;
an acquisition module 201 configured to determine a first zone identity in which the cleaning robot is currently located based on the recognition pattern.
FIG. 6 is a block diagram illustrating a logical configuration of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, electronic device 300 may include one or more of the following components: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 302 is configured to store at least one instruction for execution by the processor 301 to implement the interactive special effect calibration method provided by the method embodiments of the present application.
In some embodiments, the electronic device 300 may further include: a peripheral interface 303 and at least one peripheral. The processor 301, memory 302 and peripheral interface 303 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 303 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, touch display screen 305, camera 306, audio circuitry 307, positioning components 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, providing the front panel of the electronic device 300; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device 300 or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 300. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual Reality) shooting function or other fusion shooting functions. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the electronic device 300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the electronic device 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 309 is used to supply power to various components in the electronic device 300. The power source 309 may be alternating current, direct current, disposable or rechargeable. When the power source 309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 300 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 300. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the touch screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the electronic device 300, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the user on the electronic device 300. From the data collected by the gyro sensor 412, the processor 301 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the electronic device 300 and/or beneath the touch display screen 305. When the pressure sensor 413 is disposed on the side bezel of the electronic device 300, it can detect the user's grip on the electronic device 300, and the processor 301 performs left- or right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed beneath the touch display screen 305, the processor 301 controls operability controls on the UI according to the pressure applied by the user to the touch display screen 305. The operability controls include at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 301 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 itself identifies the user according to the collected fingerprint. Upon identifying the user as trusted, the processor 301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the electronic device 300. When a physical button or vendor logo is provided on the electronic device 300, the fingerprint sensor 414 may be integrated with the physical button or vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch display screen 305 based on the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera assembly 306 according to the ambient light intensity collected by the optical sensor 415.
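As a hedged, illustrative sketch of the brightness adjustment described above (the linear mapping and the lux range are assumptions, not the disclosed implementation):

    def display_brightness(ambient_lux: float,
                           min_brightness: float = 0.1,
                           max_brightness: float = 1.0,
                           max_lux: float = 1000.0) -> float:
        # Clamp the ambient light reading into [0, max_lux] and map it
        # linearly onto the allowed brightness range; real devices would
        # typically use tuned curves and hysteresis.
        ratio = min(max(ambient_lux / max_lux, 0.0), 1.0)
        return min_brightness + ratio * (max_brightness - min_brightness)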
The proximity sensor 416, also called a distance sensor, is typically disposed on the front panel of the electronic device 300. The proximity sensor 416 is used to measure the distance between the user and the front surface of the electronic device 300. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the electronic device 300 is gradually decreasing, the processor 301 controls the touch display screen 305 to switch from the screen-on state to the screen-off state; when the proximity sensor 416 detects that this distance is gradually increasing, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not limit the electronic device 300, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 304, including instructions executable by the processor 420 of the electronic device 300 to perform the cleaning-robot-based operation method described above, the method comprising: acquiring an operation instruction, and determining a first area identifier of the area where the cleaning robot is currently located; determining a first operation mode based on the first area identifier, wherein the operation mode is used to represent at least one of a cleaning duration, a cleaning intensity and a cleaning manner of the cleaning robot; and determining that the cleaning robot operates in the first operation mode in the first area. Optionally, the instructions may also be executed by the processor 420 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
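For illustration only, the control flow described above (acquire an operation instruction, resolve the current area identifier, look up that area's operation mode, and run in it) can be sketched in Python as follows; the robot API, the mode table and the default mode are hypothetical assumptions rather than the disclosed implementation:

    from dataclasses import dataclass

    @dataclass
    class OperationMode:
        cleaning_duration_min: int   # cleaning duration, in minutes
        cleaning_intensity: str      # e.g. "low", "medium", "high"
        cleaning_manner: str         # e.g. "sweep", "sweep_and_mop"

    # Hypothetical mapping from area identifier to operation mode.
    MODE_BY_AREA = {
        "kitchen":     OperationMode(20, "high",   "sweep_and_mop"),
        "living_room": OperationMode(30, "medium", "sweep"),
        "bedroom":     OperationMode(15, "low",    "sweep"),
    }

    DEFAULT_MODE = OperationMode(20, "medium", "sweep")

    def run_cleaning_robot(robot, operation_instruction) -> None:
        # Resolve the area the robot is currently in, then operate in the
        # mode configured for that area (robot methods are assumed APIs).
        area_id = robot.locate_current_area()
        mode = MODE_BY_AREA.get(area_id, DEFAULT_MODE)
        robot.operate(area_id, mode)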
In an exemplary embodiment, there is also provided an application/computer program product comprising one or more instructions executable by the processor 420 of the electronic device 300 to perform the cleaning-robot-based operation method described above, the method comprising: acquiring an operation instruction, and determining a first area identifier of the area where the cleaning robot is currently located; determining a first operation mode based on the first area identifier, wherein the operation mode is used to represent at least one of a cleaning duration, a cleaning intensity and a cleaning manner of the cleaning robot; and determining that the cleaning robot operates in the first operation mode in the first area. Optionally, the instructions may also be executed by the processor 420 of the electronic device 300 to perform other steps involved in the exemplary embodiments described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the application, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the application is limited only by the appended claims.

Claims (10)

1. A cleaning-robot-based operation method, characterized in that the method is applied to a cleaning robot and comprises the following steps:
acquiring an operation instruction, and determining a first area identifier of the area where the cleaning robot is currently located;
determining a first operation mode based on the first area identifier, wherein the operation mode is used to represent at least one of a cleaning duration, a cleaning intensity and a cleaning manner of the cleaning robot;
determining that the cleaning robot operates in the first operation mode in the first area.
2. The method of claim 1, wherein, after the determining that the cleaning robot operates in the first operation mode, the method further comprises:
determining a second area identifier of the area where the cleaning robot is currently located, and determining a second operation mode based on the second area identifier, wherein the second area is an area different from the first area;
determining that the cleaning robot is switched from the first operation mode to the second operation mode and operates in the second operation mode.
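A minimal, hypothetical Python sketch of the area-switching behaviour recited in claim 2: whenever the resolved area identifier changes, the robot switches to the new area's operation mode. The robot methods and the mode table are illustrative assumptions only:

    def patrol_and_switch(robot, mode_by_area, default_mode) -> None:
        # Re-resolve the area identifier as the robot moves and switch the
        # operation mode whenever it changes (robot methods are assumed APIs).
        current_area = None
        while robot.is_running():
            area_id = robot.locate_current_area()
            if area_id != current_area:
                current_area = area_id
                robot.switch_mode(mode_by_area.get(area_id, default_mode))
            robot.clean_step()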
3. The method of claim 1, wherein determining the first area identifier of the area where the cleaning robot is currently located comprises:
identifying, by using an image detection classification model, whether a target area object feature exists in the area where the cleaning robot is currently located, wherein the area object features comprise at least one of a size feature, a color feature and a contour feature;
determining that the target area object feature exists, and determining the first area identifier of the area where the cleaning robot is currently located.
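As a hedged sketch of the image-based recognition recited in claim 3, assuming a pre-trained detector that returns (label, confidence) pairs for area object features in a camera frame; the detector interface, the label-to-area table and the threshold are assumptions:

    # Hypothetical mapping from a detected area object to an area identifier.
    AREA_BY_OBJECT = {
        "refrigerator": "kitchen",
        "sofa":         "living_room",
        "bed":          "bedroom",
    }

    def identify_area_from_image(frame, detector, threshold: float = 0.6):
        # The detector is assumed to return (label, confidence) pairs based
        # on size, color and contour features of objects in the frame.
        for label, confidence in detector.detect(frame):
            if confidence >= threshold and label in AREA_BY_OBJECT:
                return AREA_BY_OBJECT[label]
        return None  # no target area object feature found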
4. The method of claim 1 or 3, wherein determining the first area identifier of the area where the cleaning robot is currently located comprises:
identifying, by using a positioning module, the coordinates of the area where the cleaning robot is currently located;
determining the first area identifier of the area where the cleaning robot is currently located based on those coordinates.
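A minimal sketch of the coordinate-based determination recited in claim 4, assuming the map stores each area as an axis-aligned rectangle in the robot's map frame; the data layout is an assumption for illustration:

    # Hypothetical map: area identifier -> (x_min, y_min, x_max, y_max) in metres.
    AREA_BOUNDS = {
        "kitchen":     (0.0, 0.0, 3.0, 4.0),
        "living_room": (3.0, 0.0, 9.0, 5.0),
        "bedroom":     (0.0, 4.0, 3.0, 8.0),
    }

    def area_from_coordinates(x: float, y: float):
        # Return the identifier of the first area whose bounds contain (x, y).
        for area_id, (x_min, y_min, x_max, y_max) in AREA_BOUNDS.items():
            if x_min <= x <= x_max and y_min <= y <= y_max:
                return area_id
        return None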
5. The method of claim 1, wherein, before the acquiring the operation instruction, the method further comprises:
acquiring a map construction instruction, wherein the map construction instruction comprises at least two areas to be cleaned;
determining a shooting scheme corresponding to each area to be cleaned according to a construction requirement corresponding to the map construction instruction, wherein different shooting schemes correspond to different shooting heights and/or different shooting angles;
constructing the map by adopting the corresponding shooting scheme for each of the different areas to be cleaned.
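To illustrate the per-area shooting schemes recited in claim 5, a hedged Python sketch in which each area to be cleaned is mapped with its own capture height and angle; the scheme table, the construction requirements and the robot calls are hypothetical placeholders:

    from dataclasses import dataclass

    @dataclass
    class ShootingScheme:
        height_m: float     # shooting height used while mapping the area
        angle_deg: float    # shooting angle used while mapping the area

    # Hypothetical schemes keyed by the construction requirement of an area.
    SCHEME_BY_REQUIREMENT = {
        "low_furniture": ShootingScheme(height_m=0.1, angle_deg=0.0),
        "open_floor":    ShootingScheme(height_m=0.1, angle_deg=15.0),
    }

    def build_map(robot, map_instruction) -> None:
        # Map each area with the shooting scheme its requirement calls for
        # (the instruction fields and robot methods are assumed APIs).
        for area in map_instruction.areas_to_clean:
            scheme = SCHEME_BY_REQUIREMENT[area.requirement]
            frames = robot.capture_area(area, scheme)
            robot.merge_into_map(area, frames)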
6. The method of claim 1, wherein, before the acquiring the operation instruction, the method further comprises:
acquiring at least two sample images, wherein each sample image comprises at least one area object feature;
labeling each sample image with a corresponding area identifier on the basis of the area object features;
training a preset image semantic segmentation model by using the sample images labeled with the area identifiers and the area object features included in the sample images, to obtain a first image detection classification model that satisfies a preset condition, wherein the first image detection classification model is used to determine the area identifier of the area where the cleaning robot is currently located.
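For illustration only, a hedged PyTorch-style training loop matching the shape of claim 6: labeled sample images go in, an image detection classification model comes out, and training stops once a preset accuracy condition is met. The dataset format, model architecture and accuracy threshold are assumptions, not the disclosed implementation:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader

    def train_area_classifier(model: nn.Module,
                              train_set,
                              target_accuracy: float = 0.95,
                              max_epochs: int = 50,
                              lr: float = 1e-3) -> nn.Module:
        # train_set is assumed to yield (image_tensor, area_label) pairs,
        # where area_label is an integer class index for the area identifier.
        loader = DataLoader(train_set, batch_size=32, shuffle=True)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss()
        for epoch in range(max_epochs):
            correct, total = 0, 0
            for images, labels in loader:
                optimizer.zero_grad()
                logits = model(images)
                loss = criterion(logits, labels)
                loss.backward()
                optimizer.step()
                correct += (logits.argmax(dim=1) == labels).sum().item()
                total += labels.numel()
            if total and correct / total >= target_accuracy:  # preset condition
                break
        return model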
7. The method of claim 6, wherein, after obtaining the first image detection classification model that satisfies the preset condition, the method further comprises:
performing model compression on the first image detection classification model to obtain a second image detection classification model;
deploying the first image detection classification model to a server side, and deploying the second image detection classification model to the cleaning robot;
after the operation instruction is acquired, determining a recognition mode based on the operation state of the cleaning robot, wherein the recognition mode corresponds to recognition using the first image detection classification model or recognition using the second image detection classification model;
determining, based on the recognition mode, the first area identifier of the area where the cleaning robot is currently located.
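A hedged sketch of the deployment split recited in claim 7: the full model is served remotely, the compressed model stays on the robot, and the recognition mode is chosen from the robot's operation state. Network availability and battery level below are purely illustrative stand-ins for that state, and all robot methods are assumed APIs:

    def choose_recognition_mode(robot) -> str:
        # Prefer the full server-side model when the robot can reach the
        # server and has battery headroom; otherwise fall back to the
        # compressed on-device model.
        if robot.is_online() and robot.battery_level() > 0.2:
            return "server_model"    # first image detection classification model
        return "on_device_model"     # second (compressed) model on the robot

    def identify_area(robot, frame):
        # Run recognition with whichever model the current mode selects.
        if choose_recognition_mode(robot) == "server_model":
            return robot.server_client.classify(frame)
        return robot.local_model.classify(frame)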
8. A cleaning-robot-based operation device, characterized in that the device is applied to a cleaning robot and comprises:
an acquisition module, configured to acquire an operation instruction and determine a first area identifier of the area where the cleaning robot is currently located;
a determination module, configured to determine a first operation mode based on the first area identifier, wherein the operation mode is used to represent at least one of a cleaning duration, a cleaning intensity and a cleaning manner of the cleaning robot;
an operation module, configured to determine that the cleaning robot operates in the first operation mode.
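To make the module structure of claim 8 concrete, a hedged Python sketch with one class per module; the class and method names, the mode table and the robot API are illustrative assumptions only:

    class AcquisitionModule:
        # Acquires the operation instruction and resolves the current area.
        def __init__(self, robot):
            self.robot = robot

        def acquire(self, operation_instruction):
            return self.robot.locate_current_area()   # assumed robot API

    class DeterminationModule:
        # Maps an area identifier to an operation mode.
        def __init__(self, mode_by_area, default_mode):
            self.mode_by_area = mode_by_area
            self.default_mode = default_mode

        def determine(self, area_id):
            return self.mode_by_area.get(area_id, self.default_mode)

    class OperationModule:
        # Has the cleaning robot operate in the determined mode.
        def __init__(self, robot):
            self.robot = robot

        def operate(self, area_id, mode):
            self.robot.operate(area_id, mode)          # assumed robot API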
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor, connected with the memory, for executing the executable instructions to perform the operations of the cleaning-robot-based operation method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable instructions, wherein the instructions, when executed, perform the operations of the cleaning-robot-based operation method of any one of claims 1 to 7.
CN202110515240.5A 2021-05-12 2021-05-12 Operation method and device based on cleaning robot, electronic equipment and medium Pending CN115413959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110515240.5A CN115413959A (en) 2021-05-12 2021-05-12 Operation method and device based on cleaning robot, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN115413959A true CN115413959A (en) 2022-12-02

Family

ID=84230477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110515240.5A Pending CN115413959A (en) 2021-05-12 2021-05-12 Operation method and device based on cleaning robot, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115413959A (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007065034A1 (en) * 2005-12-02 2007-06-07 Irobot Corporation Modular robot
US20180348783A1 (en) * 2017-05-31 2018-12-06 Neato Robotics, Inc. Asynchronous image classification
US10293489B1 (en) * 2017-12-15 2019-05-21 Ankobot (Shanghai) Smart Technologies Co., Ltd. Control method and system, and cleaning robot using the same
CN108154098A (en) * 2017-12-20 2018-06-12 歌尔股份有限公司 A kind of target identification method of robot, device and robot
WO2020077850A1 (en) * 2018-10-18 2020-04-23 深圳乐动机器人有限公司 Method and apparatus for dividing and identifying indoor region, and terminal device
WO2020141924A1 (en) * 2019-01-04 2020-07-09 Samsung Electronics Co., Ltd. Apparatus and method of generating map data of cleaning space
US20200218274A1 (en) * 2019-01-04 2020-07-09 Samsung Electronics Co., Ltd. Apparatus and method of generating map data of cleaning space
CN110200549A (en) * 2019-04-22 2019-09-06 深圳飞科机器人有限公司 Clean robot control method and Related product
WO2021008339A1 (en) * 2019-07-16 2021-01-21 深圳市杉川机器人有限公司 Robot, robot-based cleaning method, and computer readable storage medium
CN111568314A (en) * 2020-05-26 2020-08-25 深圳市杉川机器人有限公司 Cleaning method and device based on scene recognition, cleaning robot and storage medium
CN111643010A (en) * 2020-05-26 2020-09-11 深圳市杉川机器人有限公司 Cleaning robot control method and device, cleaning robot and storage medium
CN111784819A (en) * 2020-06-17 2020-10-16 科沃斯机器人股份有限公司 Multi-floor map splicing method and system and self-moving robot
CN111539399A (en) * 2020-07-13 2020-08-14 追创科技(苏州)有限公司 Control method and device of self-moving equipment, storage medium and self-moving equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination