CN111568314B - Cleaning method and device based on scene recognition, cleaning robot and storage medium - Google Patents


Info

Publication number
CN111568314B
Authority
CN
China
Prior art keywords
area
target
type
cleaning
cleaned
Prior art date
Legal status
Active
Application number
CN202010455863.3A
Other languages
Chinese (zh)
Other versions
CN111568314A (en)
Inventor
杨勇
吴泽晓
陈文辉
张康健
Current Assignee
Shenzhen 3irobotix Co Ltd
Original Assignee
Shenzhen 3irobotix Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen 3irobotix Co Ltd
Priority to CN202010455863.3A
Publication of CN111568314A
Application granted
Publication of CN111568314B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4008: Arrangements of switches, indicators or the like
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Abstract

The application relates to a cleaning method and device based on scene recognition, a cleaning robot and a storage medium. The method comprises the following steps: acquiring an environment image corresponding to an area to be cleaned; performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area; calling a scene recognition model, and carrying out scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned; determining the region type corresponding to the target region according to the region scene and the detection result; and adjusting the cleaning strategy according to the area type, and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy. By adopting the method, the cleaning efficiency of the cleaning robot can be effectively improved.

Description

Cleaning method and device based on scene recognition, cleaning robot and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular to a scene recognition-based cleaning method and apparatus, a cleaning robot, and a storage medium.
Background
With the development of science and technology, cleaning robots, as intelligent household appliances, can automatically clean the floors of areas in a room that need cleaning, reducing the cleaning workload of the user. Conventionally, a cleaning robot cleans according to a preset cleaning route.
However, in practical use, the area cleaned by the cleaning robot may contain soiled areas that are relatively difficult to clean. To ensure that the whole area is cleaned, a conventional cleaning robot can only clean the entire area multiple times; it cannot accurately detect soiled areas and clean them in a targeted manner, so its cleaning efficiency is low.
Disclosure of Invention
In view of the above, it is necessary to provide a scene recognition-based cleaning method, apparatus, cleaning robot, and storage medium capable of improving the cleaning efficiency of the cleaning robot.
A method of cleaning based on scene recognition, the method comprising:
acquiring an environment image corresponding to an area to be cleaned;
performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area;
calling a scene recognition model, and carrying out scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned;
determining the region type corresponding to the target region according to the region scene and the detection result;
and adjusting the cleaning strategy according to the area type, and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
In one embodiment, the calling a scene recognition model, and performing scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned includes:
generating an object group matrix according to the environment object;
calling a scene recognition model, and extracting features according to the scene recognition model to obtain object group features corresponding to the object group matrix;
and carrying out scene recognition according to the object group characteristics to obtain an area scene corresponding to the area to be cleaned and output by the scene recognition model.
In one embodiment, the determining, according to the area scene and the detection result, the area type corresponding to the target area includes:
acquiring a plurality of region type weights corresponding to the region scenes;
obtaining a plurality of detection area types corresponding to the target area in a detection result, and performing weighting processing on the detection area types according to the area type weights to obtain area type scores;
and determining a target area type from the detection area types according to the area type scores.
In one embodiment, the method further comprises:
acquiring user schedule information corresponding to a user identifier in a preset time period;
predicting according to the user schedule information corresponding to the user identification to obtain a periodic behavior type corresponding to the user identification;
and adjusting a corresponding time sub-strategy of the cleaning robot according to the periodic behavior type.
In one embodiment, the method further comprises:
when the detection result further comprises a target object, acquiring contour information corresponding to the target object;
determining an object type corresponding to the target object according to the detection result and the contour information;
controlling the cleaning robot to move the target object to a target position corresponding to the object type when the object type belongs to the target type.
In one embodiment, the method further comprises:
when the detection result further comprises a dynamic object, determining an object position corresponding to the dynamic object according to the environment object;
determining behavior track information corresponding to the dynamic object according to the object position;
and generating an object moving area according to the behavior track information, and controlling the cleaning robot to execute corresponding cleaning operation on the object moving area.
A scene recognition based cleaning device, the device comprising:
the image detection module is used for acquiring an environment image corresponding to an area to be cleaned; performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area;
the scene recognition module is used for calling a scene recognition model, carrying out scene recognition on the area to be cleaned according to the environment object, and obtaining an area scene corresponding to the area to be cleaned;
the type determining module is used for determining the area type corresponding to the target area according to the area scene and the detection result;
and the area cleaning module is used for adjusting the cleaning strategy according to the area type and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
In one embodiment, the scene recognition module is further configured to generate an object group matrix from the environment objects; calling a scene recognition model, and extracting features according to the scene recognition model to obtain object group features corresponding to the object group matrix; and carrying out scene recognition according to the object group characteristics to obtain an area scene corresponding to the area to be cleaned and output by the scene recognition model.
A cleaning robot comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the cleaning method based on scene recognition when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned scene recognition based cleaning method.
According to the cleaning method and device based on scene recognition, the cleaning robot and the storage medium, the target detection is performed on the environment image by acquiring the environment image corresponding to the area to be cleaned, so that whether the area to be cleaned comprises the target area is detected. When the detection result shows that the area to be cleaned comprises the target area, acquiring an environment object corresponding to the area to be cleaned, calling a scene recognition model, carrying out scene recognition on the area to be cleaned according to the environment object, obtaining an area scene corresponding to the area to be cleaned and output by the scene recognition model, and determining the area type corresponding to the target area according to the area scene and the detection result. The area type corresponding to the target area is determined based on scene recognition and image detection, and the accuracy of target area detection is improved. The cleaning strategy is adjusted according to the area type, and corresponding cleaning operation is executed on the target area according to the adjusted cleaning strategy, so that the target area is cleaned in a targeted manner according to the area type, and the cleaning efficiency of the cleaning robot is effectively improved.
Drawings
FIG. 1 is a diagram of an application environment of a cleaning method based on scene recognition in one embodiment;
FIG. 2 is a schematic flow diagram of a scene recognition based cleaning method in one embodiment;
FIG. 3 is a flowchart illustrating steps of calling a scene recognition model, and performing scene recognition on an area to be cleaned according to an environment object to obtain an area scene corresponding to the area to be cleaned in one embodiment;
FIG. 4 is a schematic flow chart of a cleaning method based on scene recognition in another embodiment;
FIG. 5 is a schematic flow chart of a cleaning method based on scene recognition in yet another embodiment;
FIG. 6 is a block diagram of a cleaning device based on scene recognition in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The cleaning method based on scene recognition provided by the application can be applied to the cleaning robot shown in FIG. 1. As shown in FIG. 1, the cleaning robot 100 may include, but is not limited to, a sensor 102, a controller 104, and an execution component 106. The controller 104 may control the cleaning robot 100 to perform the corresponding cleaning operation according to the scene recognition-based cleaning method. While the cleaning robot 100 moves or cleans, an environment image corresponding to an area to be cleaned is collected by the sensor 102. The cleaning robot 100 performs target detection on the environment image through the controller 104, and acquires an environment object corresponding to the region to be cleaned when the detection result indicates that the region to be cleaned includes a target region. The cleaning robot 100 calls the scene recognition model, performs scene recognition on the region to be cleaned according to the environment object to obtain a region scene corresponding to the region to be cleaned, and determines the region type corresponding to the target region according to the region scene and the detection result. The cleaning robot 100 adjusts the cleaning strategy according to the region type and controls the execution component 106 to perform the corresponding cleaning operation on the target region according to the adjusted strategy. The sensor 102 is a sensing device disposed in the cleaning robot and may specifically include, but is not limited to, a vision sensor, a laser sensor, an ultrasonic sensor, a video camera, a depth camera, and the like. The controller 104 may specifically include, but is not limited to, a processor (CPU), a memory, a control circuit, and the like. The execution component 106 may specifically include, but is not limited to, a moving component, a cleaning component, and the like.
In one embodiment, as shown in fig. 2, a cleaning method based on scene recognition is provided, which is exemplified by applying the method to the cleaning robot 100 in fig. 1, and includes the following steps:
step 202, an environment image corresponding to an area to be cleaned is acquired.
The area to be cleaned is the region that the cleaning robot needs to clean when performing cleaning, and it corresponds to the environment in which the cleaning robot is located. For example, in practical applications, the cleaning robot is generally used in a house or an office, and the area to be cleaned may be the area range corresponding to the house or the office room. In one embodiment, the user or the cleaning robot may divide a house or an office into areas, and the area to be cleaned may be the whole of the house or office or a partial area of the whole. The user refers to the owner or operator of the cleaning robot.
The area to be cleaned may be an area within the visual range of the cleaning robot that needs to be cleaned. For example, during the cleaning process of the cleaning robot, an area image in the visual range of the cleaning direction may be collected to obtain an environment image corresponding to the area to be cleaned. The cleaning direction is a moving direction of the cleaning robot during cleaning, and the visual range may be an effective detection range of the cleaning robot corresponding to the sensor. The area to be cleaned may also be the entire area to be cleaned. For example, the cleaning robot can move in the cleaning area through the moving assembly before starting cleaning, and an environment image corresponding to the whole cleaning area is acquired during the moving process.
In one embodiment, the cleaning robot may acquire an environment image corresponding to the entire cleaning region and detect from the environment image whether the region includes target regions; one or more target regions may be detected. The cleaning robot can first clean all detected target areas and then clean the remaining uncleaned portion of the area to be cleaned, thereby improving the cleaning efficiency of the cleaning robot.
The environment image is image data of an environment corresponding to the area to be cleaned, and the cleaning robot can acquire the environment image corresponding to the area to be cleaned through the corresponding sensor. The environmental image corresponding to the cleaning area may be an image of the cleaning robot in the visual range during the cleaning process, or an image of the entire cleaning area before the cleaning is started. When the sensor types corresponding to the sensors for acquiring the environment images are different, the data types corresponding to the environment images acquired by the sensors may also be different. For example, the cleaning robot may be provided with a camera, a laser sensor, an ultrasonic sensor, or the like. The cleaning robot can acquire video data in a visual range through the camera in the cleaning process. The cleaning robot may extract image frames from the video data as the environment image. The cleaning robot can also directly acquire image data in a visual range through the camera.
The cleaning robot can also acquire an environment image corresponding to an area to be cleaned through the laser sensor, and the environment image can be specifically a point cloud image according to the data type of the environment image acquired by the laser sensor. Specifically, the cleaning robot may transmit a detection signal, which may be a laser beam, to the area to be cleaned through the laser sensor. The cleaning robot can receive the signal reflected by the ground or the object in the area to be cleaned, and the reflected signal is compared with the detection signal to obtain the point cloud data corresponding to the area to be cleaned. The cleaning robot can perform data cleaning, point cloud segmentation, point cloud projection and other processing on the point cloud data to obtain a point cloud image corresponding to the area to be cleaned.
And 204, performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises the target area.
The cleaning robot can perform target detection on the environment image corresponding to the area to be cleaned to obtain a detection result corresponding to the environment image. Specifically, the cleaning robot may call an image detection model, input the environment image into the image detection model, and perform target detection on the environment image through the model to obtain the detection result it outputs. The image detection model may be pre-established, trained, and then configured in the cleaning robot, so that the robot can call it to perform target detection and detect, based on vision, whether the area to be cleaned includes a target area. The image detection model may be established based on a target detection algorithm, which may be any one, or a combination, of target detection algorithms such as YOLO, Faster R-CNN, CornerNet, MobileNet, or SSD (Single Shot MultiBox Detector). A plurality may mean two or more.
The cleaning robot can obtain the detection result output by the image detection model and judge from it whether the area to be cleaned includes a target area. The target area refers to a soiled floor area within the area to be cleaned. For example, the target area may specifically include, but is not limited to, a coffee stain area, a soy sauce stain area, a milk stain area, a silt area, and the like. Through the image detection model, the cleaning robot can perform vision-based target detection on the environment image to determine whether a soiled region exists in the area to be cleaned, detecting soiled regions more accurately and enabling the robot to clean them in a targeted manner.
When the cleaning robot detects a target area in the environment image, the detection result indicates that the area to be cleaned includes the target area, and the detection result may further include description information corresponding to the target area. For example, the image detection model may mark the target region in the environment image with a rectangular frame, and the description information may include the size and coordinates of that rectangular frame as well as a plurality of candidate types, each with a type confidence, for the detected target region. The type confidence indicates the likelihood that the detected region belongs to the corresponding area type. The detection result may further include environmental objects detected in the environment image. An environmental object is a static object in the environment of the cleaning robot, generally an object that moves infrequently in the surroundings; for example, a tea table, a sofa, a television, or a television cabinet. When it is determined that the area to be cleaned includes the target area, the cleaning robot may obtain the environmental objects detected in the area to be cleaned from the detection result, so as to determine scene information corresponding to the area to be cleaned from those objects.
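As an illustration only, a detection result with the fields described above might be consumed as in the following Python sketch; the names Detection, type_scores, and the stain labels are assumptions for demonstration, not identifiers from the patent.

```python
# Hypothetical shape of one detection; fields mirror the description above.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # rectangular frame: x, y, width, height
    type_scores: Dict[str, float] = field(default_factory=dict)  # candidate type -> confidence

# Assumed soiled-area labels; the patent's examples are coffee, soy sauce, milk, silt.
SOILED_TYPES = {"coffee_stain", "soy_stain", "milk_stain", "silt"}

def find_target_areas(detections: List[Detection], threshold: float = 0.5) -> List[Detection]:
    """Keep detections whose best-scoring candidate type is a soiled-area type."""
    targets = []
    for det in detections:
        if not det.type_scores:
            continue
        best = max(det.type_scores, key=det.type_scores.get)
        if best in SOILED_TYPES and det.type_scores[best] >= threshold:
            targets.append(det)
    return targets
```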
In one embodiment, when the detection result indicates that the area to be cleaned does not include the target area, the cleaning robot may continue to acquire an environment image corresponding to a next area to be cleaned for target detection, or clean the area to be cleaned which does not include the target area.
And step 206, calling a scene recognition model, and carrying out scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned.
The cleaning robot can call the scene recognition model, use it to perform scene recognition on the area to be cleaned, and determine scene information corresponding to the area to be cleaned from the environment objects; the scene information can include the scene type of the area to be cleaned. The scene type corresponding to the area to be cleaned is recorded as the area scene. For example, the area scene may be a living room, bathroom, balcony, kitchen, dining room, bedroom, or another scene type in a house. When the environment objects include a bathtub, the cleaning robot can determine from that object that the area scene corresponding to the area to be cleaned is a bathroom.
The scene recognition model can be a network model established based on a neural network and obtained after training. For example, the scene recognition model may be a network model established based on Place-CNN (convolutional neural network), and its network architecture may be any of the architectures available for convolutional neural networks. For example, the scene recognition model may be established based on ResNet (Residual Network). The cleaning robot can call the scene recognition model, input the environment objects into it, perform scene recognition on the area to be cleaned through the object-to-scene associations learned in training, and output the area scene corresponding to the area to be cleaned.
And step 208, determining the area type corresponding to the target area according to the area scene and the detection result.
Because the target area is a soiled area within the area to be cleaned, it is more difficult to detect, and image detection alone may yield an inaccurate result. The cleaning robot can therefore determine the area type of the target area jointly from the recognized area scene of the area to be cleaned and the detection result for the target area, which improves the accuracy of identifying that area type. Specifically, the probability that a given kind of soiled area occurs differs across area scenes. For example, when the area to be cleaned is a balcony, a silt-type soiled area is more likely to occur than a soy sauce stain-type soiled area. When the area to be cleaned is a bedroom, a milk stain-type soiled area is more likely than a silt-type soiled area.
The detection result output by the image detection model may include a target region and type confidences respectively corresponding to the types corresponding to the plurality of target regions. The type confidence coefficient can be expressed in the form of decimal, percentage score and the like, and the type corresponding to the region can be set in advance according to the actual application requirement in a classified mode. The cleaning robot can obtain multiple type confidence degrees corresponding to the target area in the detection result, adjust the type confidence degrees according to the area scene, and determine the area type corresponding to the target area according to the adjusted type confidence degrees.
In one embodiment, the cleaning robot may acquire a plurality of region type weights corresponding to the region scene. There may be a correspondence between the region scene and the region type weights, and the scene type of each region may correspond to a different region type weight. The correspondence between the region scene and the region type weight may be obtained through big data analysis in advance. The region type weight may be used to indicate the possibility that the target region belongs to multiple region types respectively in the region scene, and the corresponding weights of the same region type in different scenes may be different. The cleaning robot can acquire a plurality of detection area types corresponding to the target area in the detection result, and perform weighting processing on the detection area types according to the area type weights to obtain area type scores corresponding to the plurality of area types. The cleaning robot may determine a target area type corresponding to the target area according to the area type score.
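A minimal Python sketch of this weighting step is given below; the weight tables, scene names, and stain labels are illustrative assumptions, since the patent does not publish concrete values.

```python
# Assumed scene -> region-type weight tables (illustrative values only).
SCENE_TYPE_WEIGHTS = {
    "balcony": {"silt": 0.5, "coffee_stain": 0.2, "milk_stain": 0.2, "soy_stain": 0.1},
    "bedroom": {"milk_stain": 0.5, "coffee_stain": 0.3, "silt": 0.1, "soy_stain": 0.1},
}

def target_area_type(area_scene: str, type_confidences: dict) -> str:
    """Weight each detected area type's confidence by the scene-specific
    region-type weight and return the highest-scoring area type."""
    weights = SCENE_TYPE_WEIGHTS[area_scene]
    scores = {t: c * weights.get(t, 0.0) for t, c in type_confidences.items()}
    return max(scores, key=scores.get)

# A detection the image model alone found ambiguous is resolved by the scene:
print(target_area_type("balcony", {"silt": 0.45, "soy_stain": 0.48}))  # -> silt
```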
And 210, adjusting the cleaning strategy according to the area type, and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
The cleaning strategy refers to cleaning rules set for the cleaning robot in advance; it may specifically include, but is not limited to, the cleaning time, cleaning mode, number of cleaning passes, cleaning frequency, and the like. For target areas of different area types, the cleaning robot may make the same or different adjustments to the cleaning strategy to obtain an adjusted cleaning strategy corresponding to the area type, and control its execution components to perform the corresponding cleaning operation on the target area. For example, when the cleaning robot detects a target area within the area to be cleaned, it may adjust the cleaning strategy to avoid the target area and clean it in a targeted manner after the other areas are finished, or it may clean the target area first and then clean the remaining uncleaned areas. When the area type of the target area is the coffee stain type, the cleaning robot can increase the number of cleaning passes, the cleaning force, and so on for the target area, thereby adopting a strategy matched to the area type and cleaning the target area more effectively.
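As one way to picture the adjustment, the sketch below maps area types to strategy changes; the strategy fields and the concrete adjustments are assumptions for illustration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CleaningStrategy:
    passes: int = 1          # number of cleaning passes
    suction: str = "normal"  # cleaning force
    wet_mop: bool = False    # cleaning mode

# Assumed per-area-type adjustments; a coffee stain gets more passes and force.
ADJUSTMENTS = {
    "coffee_stain": dict(passes=3, suction="high", wet_mop=True),
    "silt": dict(passes=2, suction="high"),
}

def adjust_strategy(base: CleaningStrategy, area_type: str) -> CleaningStrategy:
    """Return a copy of the base strategy with the type-specific overrides applied."""
    return replace(base, **ADJUSTMENTS.get(area_type, {}))

adjusted = adjust_strategy(CleaningStrategy(), "coffee_stain")
```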
In this embodiment, the target detection is performed on the environment image by acquiring the environment image corresponding to the region to be cleaned, so as to detect whether the region to be cleaned includes the target region. When the detection result shows that the area to be cleaned comprises the target area, acquiring an environment object corresponding to the area to be cleaned, calling a scene recognition model, carrying out scene recognition on the area to be cleaned according to the environment object, obtaining an area scene corresponding to the area to be cleaned and output by the scene recognition model, and determining the area type corresponding to the target area according to the area scene and the detection result. The area type corresponding to the target area is determined based on scene recognition and image detection, and the accuracy of target area detection is effectively improved. The cleaning strategy is adjusted according to the area type, and corresponding cleaning operation is executed on the target area according to the adjusted cleaning strategy, so that effective targeted cleaning is performed on the target area according to the area type, and the cleaning efficiency of the cleaning robot is effectively improved.
In an embodiment, as shown in fig. 3, the step of calling the scene recognition model, performing scene recognition on the region to be cleaned according to the environment object, and obtaining the region scene corresponding to the region to be cleaned includes:
step 302, generating an object group matrix according to the environment object.
And 304, calling a scene recognition model, and extracting features according to the scene recognition model to obtain object group features corresponding to the object group matrix.
And step 306, carrying out scene recognition according to the object group characteristics to obtain an area scene corresponding to the area to be cleaned and output by the scene recognition model.
When the environmental object is an object that may exist in a plurality of scenes, there may be an error in scene recognition through an association between a single environmental object and a scene. For example, there may be televisions in both the living room and the bedroom, and when the environment object is a television, it is impossible to accurately identify whether the corresponding area scene is the living room or the bedroom by the television. Therefore, the cleaning robot can generate an object group matrix according to the environment object and determine the area scene corresponding to the area to be cleaned according to the object group, so that the accuracy of identifying the scene type corresponding to the area to be cleaned is improved.
Specifically, the cleaning robot may combine the environmental objects detected in the environmental image to obtain a plurality of object groups, and each object group may include one or more environmental objects. In one embodiment, since too many environment objects in each object group may cause a large limitation to the scene, the number of environment objects in the object group may be one of 1, 2, 3, or 4. For example, the cleaning robot may arrange and combine a plurality of environment objects, and each two environment objects constitute one object group. Environmental objects in different object groups may be repeated. The incidence relation between the environment object and the scene can be more accurately expressed through the object group comprising a plurality of environment objects. For example, when a tea table and a sofa are included in the object group, the area scene corresponding to the area to be cleaned may be determined as a living room.
The cleaning robot may arrange the generated object groups into a matrix to generate an object group matrix. It can call the scene recognition model, input the object group matrix into it, and extract features from the matrix through the model to obtain the object group features corresponding to the object group matrix. The cleaning robot can classify the object group features through the scene recognition model, thereby performing scene recognition according to those features. Specifically, the scene recognition model may include, but is not limited to, classifiers such as an SVM (Support Vector Machine), a DT (Decision Tree), an NBM (Naive Bayes Model), or a classification neural network, and the cleaning robot can classify the area to be cleaned according to the object group features to obtain the area scene output by the scene recognition model.
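The following sketch shows one plausible encoding of such an object group matrix, assuming pairwise groups and a fixed object vocabulary; the patent leaves both the group size (one to four objects) and the numeric encoding unspecified.

```python
from itertools import combinations
import numpy as np

# Assumed object vocabulary; in practice it would cover the detector's labels.
VOCAB = ["sofa", "tea_table", "television", "tv_cabinet", "bathtub", "bed"]
INDEX = {name: i for i, name in enumerate(VOCAB)}

def object_group_matrix(env_objects: list) -> np.ndarray:
    """Encode every two-object group as a multi-hot row over the vocabulary."""
    groups = list(combinations(sorted(set(env_objects)), 2))
    matrix = np.zeros((len(groups), len(VOCAB)), dtype=np.float32)
    for row, group in enumerate(groups):
        for obj in group:
            matrix[row, INDEX[obj]] = 1.0
    return matrix

# Three detected objects yield three pairwise groups, i.e. a 3 x 6 matrix
# that can be fed to the scene recognition model for feature extraction.
m = object_group_matrix(["sofa", "tea_table", "television"])
```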
In this embodiment, an object group matrix is generated by an environment object, a scene recognition model is called to perform feature extraction on the object group matrix, so as to obtain object group features corresponding to the object group matrix, and the incidence relation between the environment object and the scene can be more accurately expressed according to the object group features extracted by the object group matrix. And scene recognition is carried out according to the characteristics of the object group to obtain an area scene corresponding to the area to be cleaned, so that the accuracy of the scene recognition of the area to be cleaned is effectively improved.
In one embodiment, as shown in fig. 4, the method further includes:
step 402, obtaining user schedule information corresponding to a user identifier in a preset time period.
And 404, predicting according to the user schedule information corresponding to the user identifier to obtain the periodic behavior type corresponding to the user identifier.
And 406, adjusting a corresponding time sub-strategy of the cleaning robot according to the periodic behavior type.
The user identifier is identification information corresponding to a user of the cleaning robot, and the user may be an owner or a user of the cleaning robot. The preset time period may be a time length set according to an actual application requirement. For example, the preset time period may be set to two weeks, one month, or three months, etc., according to actual demand. The cleaning robot may acquire user schedule information corresponding to the user identification within a preset time period. Specifically, the cleaning robot may establish a communication connection with a user terminal corresponding to the user, and acquire the schedule information of the user through the user terminal. The schedule information of the user may specifically include the time of going out and going home each time, the number of times of going out and the length of time of going out, and the like, which correspond to the user.
The cleaning robot can call a pre-established and configured behavior prediction model, and the user schedule information corresponding to the user identification in the preset time period can be subjected to prediction analysis through the behavior prediction model. Specifically, the cleaning robot may obtain behavior data of a plurality of behavior feature dimensions from the user schedule information within a preset time period, generate a user behavior matrix using behavior features corresponding to the behavior data of the plurality of dimensions, and input the user behavior matrix into the behavior prediction model. And performing operation through the behavior prediction model, and outputting the periodic behavior type corresponding to the user identification. The cleaning robot can adjust the time sub-strategy through the periodic behavior type corresponding to the user identification, so that the cleaning robot can more intelligently, accurately and effectively clean the to-be-cleaned area of the user according to the adjusted cleaning strategy. Wherein the time sub-strategy is one of a plurality of sub-strategies included in a cleaning strategy corresponding to the cleaning robot. For example, the cleaning strategy may specifically include a time sub-strategy, a mode sub-strategy, a frequency sub-strategy, a number sub-strategy, and the like.
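A hedged sketch of assembling such a user behavior matrix follows; the chosen feature dimensions (leaving hour, returning hour, outing count) are assumptions, as the patent only speaks of multiple behavior feature dimensions.

```python
import numpy as np

def user_behavior_matrix(schedule: list) -> np.ndarray:
    """One row per day of schedule data: [leave_hour, return_hour, outings]."""
    rows = [[day["leave_hour"], day["return_hour"], day["outings"]]
            for day in schedule]
    return np.asarray(rows, dtype=np.float32)

# The behavior prediction model would consume this matrix and output a
# periodic behavior type, e.g. "away on weekday mornings", which the robot
# uses to shift its time sub-strategy so cleaning runs while the user is out.
matrix = user_behavior_matrix([
    {"leave_hour": 8, "return_hour": 18, "outings": 1},
    {"leave_hour": 9, "return_hour": 19, "outings": 2},
])
```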
In the embodiment, the periodic behavior type corresponding to the user identifier is obtained by obtaining the user schedule information corresponding to the user identifier in the preset time period and performing predictive analysis according to the user schedule information, and the time sub-strategy corresponding to the cleaning robot is adjusted according to the periodic behavior type, so that the cleaning robot is controlled to perform cleaning operation according to the adjusted time sub-strategy, and the influence on the daily life of the user is avoided.
In one embodiment, the image detection model may be obtained by training the established standard detection model through training data. In order to save computational resources of the cleaning robot, the training of the image detection model may be performed by a server corresponding to the cleaning robot, and the server configures the trained image detection model in the cleaning robot. The server may be implemented by an independent server or a server cluster composed of a plurality of servers.
Specifically, the standard detection model may be a TFLite model (TFLite is an open-source deep learning framework for on-device inference) established based on the deep learning network MobileNetV1 (an efficient convolutional neural network for mobile vision applications); the standard detection model may also be established based on other deep learning networks, for example VGG, ResNet (Residual Neural Network), RetinaNet, CornerNet-Lite, YOLO, or SSD. In one embodiment, the standard detection model may be a model established based on the MobileNet-SSD algorithm, a target detection algorithm that extracts image features through MobileNet and detects object boxes using the SSD framework; extracting image features through depthwise separable convolutions effectively improves the computational efficiency of the convolutional network.
The training data may be image data collected by model trainers according to actual training requirements, or image data from a training database. For example, the image data used for model training may be image data from TensorFlow (an open-source code library) that contains the regions and objects to be identified. The region to be identified refers to a soiled area within the area to be cleaned; for example, a coffee stain area, a soy sauce stain area, a silt area, or a milk stain area. The objects to be identified include objects in the environment of the cleaning robot, such as sofas, tea tables, dining tables, and television cabinets. After the image data used for training is acquired, it may be converted into TFRecord format (a binary data format) and input into the standard detection model as training data for model training. In one embodiment, the data used for training the standard detection model may specifically include training data, validation data, and test data.
The standard detection model is trained cyclically on the training images until training converges, yielding the trained image detection model. For example, transfer learning may be performed by fine-tuning (Fine-Tune) a pre-trained model, training a preset number of epochs (each epoch represents one pass over all training data), for example 30,000 epochs, and training is determined to have converged when the loss value of the detection model falls to a preset value. The preset value may be set in advance according to actual requirements, for example 0.2. After training is finished, the converged standard detection model can be converted through a model conversion algorithm into an image detection model in a preset format that can run in the cleaning robot to detect the environment images collected by the sensor. The image detection model may specifically be, for example, a TFLite-type detection model.
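For the conversion step, TensorFlow's public TFLite converter API can serve as a concrete reference point; the SavedModel path below is a placeholder, and the optimization flag is optional.

```python
import tensorflow as tf

# Convert the converged detection model (assumed to be exported as a
# SavedModel) into a TFLite model that can run on the robot's controller.
converter = tf.lite.TFLiteConverter.from_saved_model("trained_mobilenet_ssd/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for on-device inference
tflite_model = converter.convert()

with open("detector.tflite", "wb") as f:
    f.write(tflite_model)
```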
In this embodiment, a standard detection model is established according to a target detection algorithm, the standard detection model is trained through a training image corresponding to the detection requirement of the region to be cleaned, the trained image detection model is converted and then configured in the cleaning robot, the cleaning robot can conveniently call the image detection model to perform target detection on an environmental image, and the method is beneficial to the accurate detection of a target region and an object included in the region to be cleaned based on vision by the cleaning robot.
In one embodiment, as shown in fig. 5, the method further includes:
step 502, when the detection result further includes the target object, obtaining the contour information corresponding to the target object.
And step 504, determining the object type corresponding to the target object according to the detection result and the contour information.
And step 506, when the object type belongs to the target type, controlling the cleaning robot to move the target object to a target position corresponding to the object type.
The target object refers to an object that the cleaning robot needs to handle, and it may specifically be garbage to be cleaned within the area to be cleaned. When the objects detected in the environment image include a target object, the cleaning robot may acquire the contour information corresponding to it, in particular by means of a laser sensor. Specifically, the laser sensor comprises an emitting assembly and a receiving assembly. When it is determined that the area to be cleaned includes a target object, the cleaning robot may determine the object position from the detection frame coordinates in the detection result and emit a laser beam toward the target object through the emitting assembly. The emitting assembly may emit laser beams in multiple directions; for example, it can emit the beam horizontally, or at an included angle to the horizontal plane so as to acquire information in the vertical direction. In one embodiment, the laser sensor may include a plurality of emitting assemblies, and the angles of the lasers emitted by different assemblies may be the same or different. The cleaning robot receives the laser signal returned by the target object through the receiving assembly, and the laser sensor thereby acquires the contour information corresponding to the target object. The contour information may specifically include the volume, contour shape, and the like of the target object.
The cleaning robot can determine the object type of the target object from the detection result corresponding to the environment image together with the detected contour information. Specifically, the object types may include classification types for the target object along a plurality of classification dimensions. For example, classifying by object volume may distinguish garbage that can be drawn into the garbage storage box from garbage that cannot. Classifying by garbage classification standards may distinguish recyclable garbage, kitchen garbage, harmful garbage, other garbage, and the like. The cleaning robot can adjust the confidence corresponding to the target object in the detection result according to the contour information, and determine the object type by combining the contour information and the detection result. For example, when the target object is a milk carton, the cleaning robot may determine that it is recyclable garbage that cannot be drawn into the garbage storage box; when the target object is silt, it is other garbage that can be drawn in. Because the target object may be deformed compared with its original shape, a result obtained from image detection alone may be erroneous; combining the contour information with the detection result allows the object type to be determined more accurately and effectively improves the accuracy of object type identification.
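The fusion of contour information with the image detection result could look like the sketch below; the confidence threshold and the volume limit of the dust box are assumed parameters.

```python
def classify_target_object(detected_label: str, confidence: float,
                           volume_cm3: float, suction_limit_cm3: float = 50.0) -> dict:
    """Cross-check the image label against laser-measured contour volume."""
    suckable = volume_cm3 <= suction_limit_cm3  # can it enter the garbage storage box?
    # A deformed object (e.g. a crushed milk carton) may score low in image
    # detection; its contour volume still settles whether it can be sucked in.
    label = detected_label if confidence >= 0.4 else "unknown"
    return {"label": label, "suckable": suckable}

result = classify_target_object("milk_carton", 0.55, volume_cm3=300.0)
# -> {'label': 'milk_carton', 'suckable': False}: push it to the drop-off point
```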
The target type is a garbage type that the cleaning robot cannot draw into the garbage collection box. When the object type belongs to the target type, the cleaning robot may acquire the target position corresponding to the object type and be controlled to move the target object there. The target position may be preset by the user, with one preset position per object type; for example, recyclable garbage, kitchen garbage, harmful garbage, and other garbage can each be assigned a different placement position. The cleaning robot may move the target object to the target position corresponding to its object type. For example, when the target object is detected to be a deformed milk carton or pop can, its object type is determined to be recyclable garbage that cannot be drawn into the garbage storage box, and the cleaning robot can push it to the preset target position, which makes it convenient for the user to process target objects of the same type together.
In one embodiment, when the cleaning robot cannot move the target object to the corresponding target position, for example, when the weight of the target object is heavy, the cleaning robot may acquire the current position corresponding to the target object, and send information such as the object type and the current position corresponding to the target object to the user terminal corresponding to the cleaning robot, so as to prompt the user to clean the target object.
In one embodiment, when the object type does not belong to the target type, the cleaning robot may perform a cleaning process on the target object according to a cleaning policy corresponding to the object type. For example, when the target object is silt, the cleaning robot may suck the silt into the trash receptacle.
In this embodiment, when the detection result further includes the target object, the contour information corresponding to the target object is obtained, and the object type corresponding to the target object is determined according to the detection result and the contour information, so that an error generated when the object type is determined due to deformation of the target object is avoided, and accuracy of determining the object type is improved. When the object type belongs to the target type, the cleaning robot is controlled to move the target object to the target position corresponding to the object type, compared with the traditional mode, the target objects distributed at a plurality of positions do not need to be cleaned one by one manually, a user can clean the target object uniformly at the target position, and the cleaning efficiency of the cleaning robot is effectively improved.
In one embodiment, the method further comprises: when the detection result further comprises a dynamic object, determining an object position corresponding to the dynamic object according to the environment object; determining behavior track information corresponding to the dynamic object according to the object position; and generating an object moving area according to the behavior track information, and controlling the cleaning robot to perform corresponding cleaning operation on the object moving area.
The dynamic object refers to an object capable of moving by itself in the environment around the cleaning robot. For example, the dynamic object may be a pet in the area to be cleaned, and specifically may include a pet cat, a pet dog, a pet rabbit, and the like. The object position refers to the position of the dynamic object in the environment in the corresponding environment image. In different environment images, the object positions corresponding to the dynamic objects may be the same or different.
The cleaning robot may obtain environmental information corresponding to an ambient environment, where the environmental information may be pre-configured in the cleaning robot, or may be obtained by automatically detecting the ambient environment in which the cleaning robot is located during a working process. The environment information may specifically include, but is not limited to, a clean environment map of an environment in which the cleaning robot is located, environment objects included in the surrounding environment, and environment object positions corresponding to the environment objects. The environmental object location may be a location coordinate of the environmental object in the clean environment map. The cleaning robot can determine the object position corresponding to the dynamic object according to the relative position relationship between the dynamic object and the environment object in the environment image and the environment object position corresponding to the environment object.
The cleaning robot can determine the behavior track of the dynamic object from its object positions in the plurality of environment images. Specifically, the cleaning robot may map the object positions of the dynamic object in the plurality of environment images onto the clean environment map to obtain the dynamic object's environment map coordinates at the acquisition time of each image. The cleaning robot can connect the environment map coordinates in the order of the images' acquisition times and fit a track through them, obtaining the behavior track corresponding to the dynamic object.
The cleaning robot can determine an object moving area corresponding to the dynamic object according to the behavior track, and perform targeted cleaning on the object moving area. Specifically, the cleaning robot may acquire an object cleaning policy corresponding to the dynamic object, where the object cleaning policy may specifically include, but is not limited to, a cleaning manner, a cleaning frequency, a cleaning force, a cleaning time, and the like for an active area of the object. The object cleaning strategies for different dynamic objects may be different. And controlling the cleaning robot to correspondingly clean the moving area of the object according to the object cleaning strategy corresponding to the dynamic object.
In one embodiment, the cleaning robot may obtain a dynamic object type corresponding to the dynamic object, and the dynamic object type may include a specific variety corresponding to the dynamic object. For example, the dynamic object types may specifically include alaska dogs, golden retrievers, samoyer dogs, german shepherd dogs, autumine dogs, corgi dogs, french bulldog, and the like. The cleaning robot may obtain an object behavior feature corresponding to the object type, the object behavior feature may be obtained by performing big data analysis according to a plurality of pieces of dynamic object behavior information corresponding to the object type, and the object behavior features corresponding to different object types may be different. The cleaning robot can enlarge and adjust the activity area corresponding to the behavior track according to the behavior characteristics of the object corresponding to the object type, and the adjusted activity area is recorded as the object activity area corresponding to the dynamic object. The object activity area adjusted according to the object behavior characteristics can more accurately represent the actual activity and the area range of possible activity of the dynamic object, and the accuracy of determining the object activity area is effectively improved.
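The track fitting and area generation can be pictured with the sketch below; positions are assumed to be timestamped clean-map coordinates, and the margin stands in for the species-specific enlargement described above.

```python
def object_activity_area(positions: list, margin: float = 0.5) -> tuple:
    """positions: [(timestamp, x, y), ...] in clean-environment-map coordinates.
    Connects them in time order and inflates the track's bounding box."""
    track = [(x, y) for _, x, y in sorted(positions)]  # the fitted behavior track
    xs, ys = zip(*track)
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Three sightings of a pet produce a rectangular activity area the robot
# then cleans with the object cleaning strategy for that dynamic object.
area = object_activity_area([(1, 2.0, 3.0), (2, 2.5, 3.4), (3, 1.8, 4.1)])
```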
In this embodiment, when the detection result corresponding to the environment image further includes the dynamic object, the object position corresponding to the dynamic object is determined according to the environment object, the behavior track corresponding to the dynamic object is determined according to the object position in the environment image, the object activity area corresponding to the dynamic object is determined according to the behavior track, and the cleaning robot is controlled to clean the object activity area, so that the object activity area corresponding to the dynamic object is accurately and effectively cleaned in a targeted manner, and the cleaning efficiency of the cleaning robot is effectively improved.
It should be understood that although the various steps in the flow charts of fig. 2-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and described, and may be performed in other orders. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a cleaning apparatus based on scene recognition, including: an image detection module 602, a scene recognition module 604, a type determination module 606, and an area cleaning module 608, wherein:
an image detection module 602, configured to acquire an environment image corresponding to an area to be cleaned, perform target detection on the environment image, and acquire an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area;

a scene recognition module 604, configured to invoke a scene recognition model and perform scene recognition on the area to be cleaned according to the environment object, so as to obtain an area scene corresponding to the area to be cleaned;

a type determination module 606, configured to determine an area type corresponding to the target area according to the area scene and the detection result; and

an area cleaning module 608, configured to adjust a cleaning strategy according to the area type, and perform a corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
In one embodiment, the scene recognition module 604 is further configured to generate an object group matrix according to the environment object; invoke a scene recognition model and perform feature extraction with the scene recognition model to obtain object group features corresponding to the object group matrix; and perform scene recognition according to the object group features to obtain the area scene corresponding to the area to be cleaned as output by the scene recognition model.
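A minimal sketch of one possible object group matrix encoding and model invocation follows; the fixed object vocabulary, the row layout, and the `model.predict` interface are all assumptions, since the patent does not fix a concrete encoding:

```python
import numpy as np

# Hypothetical vocabulary of detectable environment objects.
OBJECT_VOCAB = ["bed", "sofa", "dining_table", "toilet", "stove", "desk"]

def object_group_matrix(environment_objects):
    """Encode detected environment objects as a matrix: one row per
    detection, holding a one-hot class vector plus the normalized
    (x, y, w, h) of its bounding box."""
    rows = []
    for obj in environment_objects:
        one_hot = [1.0 if name == obj["class"] else 0.0 for name in OBJECT_VOCAB]
        rows.append(one_hot + list(obj["bbox"]))
    return np.array(rows, dtype=np.float32)

def recognize_scene(model, environment_objects):
    """Feed the object group matrix to the scene recognition model and
    return the most likely area scene label."""
    matrix = object_group_matrix(environment_objects)
    scores = model.predict(matrix[np.newaxis, ...])  # assumed model API
    scenes = ["bedroom", "living_room", "kitchen", "bathroom"]
    return scenes[int(np.argmax(scores))]
```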
In one embodiment, the type determination module 606 is further configured to acquire a plurality of area type weights corresponding to the area scene; obtain a plurality of detection area types corresponding to the target area in the detection result, and perform weighting processing on the detection area types according to the area type weights to obtain area type scores; and determine the area type of the target area from the detection area types according to the area type scores.
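The weighting described here reduces to scaling each candidate detection area type by the weight that the recognized scene assigns to it and taking the highest score. The sketch below uses made-up weights and confidences:

```python
def determine_area_type(detection_types, scene_type_weights):
    """Pick the area type whose detector confidence, weighted by the
    recognized area scene, scores highest.

    detection_types:    mapping of area type -> detector confidence.
    scene_type_weights: mapping of area type -> weight under the scene.
    """
    scores = {
        area_type: confidence * scene_type_weights.get(area_type, 0.0)
        for area_type, confidence in detection_types.items()
    }
    return max(scores, key=scores.get)

# Example: a "carpet" hypothesis is more plausible in a living room,
# so it wins even though the raw confidences are close.
scene_weights = {"carpet": 0.6, "tile": 0.2, "wood_floor": 0.2}
detections = {"carpet": 0.55, "tile": 0.50}
assert determine_area_type(detections, scene_weights) == "carpet"
```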
In one embodiment, the area cleaning module 608 is further configured to acquire user schedule information corresponding to a user identifier within a preset time period; perform prediction according to the user schedule information corresponding to the user identifier to obtain a periodic behavior type corresponding to the user identifier; and adjust the corresponding time sub-strategy of the cleaning robot according to the periodic behavior type.
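A hedged sketch of this schedule-based adjustment, assuming schedule entries arrive as (ISO timestamp, behavior label) pairs; the dominant-pattern heuristic and the two-hour offset are illustrative assumptions only:

```python
from collections import Counter
from datetime import datetime

def periodic_behavior_type(schedule_entries):
    """Infer the user's dominant recurring behavior from schedule entries
    collected over the preset time period.

    schedule_entries: list of (iso_timestamp, behavior_label) pairs.
    Returns the most frequent (weekday, hour, behavior) pattern.
    """
    patterns = Counter()
    for ts, behavior in schedule_entries:
        t = datetime.fromisoformat(ts)
        patterns[(t.weekday(), t.hour, behavior)] += 1
    return patterns.most_common(1)[0][0]

def adjust_time_substrategy(pattern, default_start_hour=10):
    """Shift the cleaning start time away from hours when the user is
    usually at home (offset chosen arbitrarily for illustration)."""
    weekday, hour, behavior = pattern
    if behavior == "at_home":
        return (hour + 2) % 24
    return default_start_hour
```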
In one embodiment, the area cleaning module 608 is further configured to acquire contour information corresponding to a target object when the detection result further includes the target object; determine an object type corresponding to the target object according to the detection result and the contour information; and control the cleaning robot to move the target object to a target position corresponding to the object type when the object type belongs to a target type.
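The object-relocation decision could take a shape like the following; the `robot.move_object_to` call, the label-to-position mapping, and the size threshold are assumptions made for illustration:

```python
def handle_target_object(object_type, contour_area_cm2, robot):
    """Relocate a detected target object if its type belongs to the
    movable target types and its contour suggests it is small enough.

    Returns True if a move was commanded, False otherwise.
    """
    # Hypothetical mapping of movable object types to target positions.
    TARGET_POSITIONS = {"sock": "laundry_corner", "toy": "toy_box"}
    if object_type in TARGET_POSITIONS and contour_area_cm2 < 400:
        robot.move_object_to(TARGET_POSITIONS[object_type])  # assumed robot API
        return True
    return False
```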
In one embodiment, the area cleaning module 608 is further configured to determine an object position corresponding to the dynamic object according to the environment object when the detection result further includes the dynamic object; determining behavior track information corresponding to the dynamic object according to the object position; and generating an object moving area according to the behavior track information, and controlling the cleaning robot to perform corresponding cleaning operation on the object moving area.
For the specific definition of the cleaning apparatus based on scene recognition, reference may be made to the above definition of the cleaning method based on scene recognition, which is not repeated here. Each module in the above cleaning apparatus based on scene recognition may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor in the cleaning robot in hardware form, or stored in a memory in the cleaning robot in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a cleaning robot. The cleaning robot includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the cleaning robot is configured to provide computing and control capabilities. The memory of the cleaning robot includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the nonvolatile storage medium. The communication interface of the cleaning robot is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the cleaning method based on scene recognition. The display screen of the cleaning robot can be a liquid crystal display screen or an electronic ink display screen, and the input device of the cleaning robot can be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the cleaning robot, or an external keyboard, touch pad, or mouse.
In one embodiment, a cleaning robot is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above-mentioned embodiments of the cleaning method based on scene recognition when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned scene recognition based cleaning method embodiment.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a nonvolatile computer-readable storage medium, and when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of nonvolatile and volatile memory. Nonvolatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A method for cleaning based on scene recognition, the method comprising:
acquiring an environment image corresponding to an area to be cleaned;
performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area; the environment object is a static object in the area to be cleaned that has an association relation with a scene corresponding to the area to be cleaned;
calling a scene recognition model, and carrying out scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned;
according to a plurality of area type weights corresponding to the area scene, performing weighting processing on a plurality of detection area types corresponding to the target area in the detection result, and determining the area type corresponding to the target area according to the weighting processing result;
and adjusting the cleaning strategy according to the area type, and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
2. The method according to claim 1, wherein the calling a scene recognition model, and performing scene recognition on the area to be cleaned according to the environment object to obtain an area scene corresponding to the area to be cleaned comprises:
generating an object group matrix according to the environment object;
calling a scene recognition model, and extracting features according to the scene recognition model to obtain object group features corresponding to the object group matrix;
and carrying out scene recognition according to the object group characteristics to obtain an area scene corresponding to the area to be cleaned and output by the scene recognition model.
3. The method according to claim 1, wherein the performing weighting processing on the plurality of detection area types corresponding to the target area in the detection result according to the plurality of area type weights corresponding to the area scene, and determining the area type corresponding to the target area according to the weighting processing result comprises:
acquiring a plurality of area type weights corresponding to the area scene;

obtaining a plurality of detection area types corresponding to the target area in the detection result, and performing weighting processing on the detection area types according to the area type weights to obtain area type scores;

and determining the area type of the target area from the detection area types according to the area type scores.
4. The method of claim 1, further comprising:
acquiring user schedule information corresponding to a user identifier in a preset time period;
predicting according to the user schedule information corresponding to the user identification to obtain a periodic behavior type corresponding to the user identification;
and adjusting a corresponding time sub-strategy of the cleaning robot according to the periodic behavior type.
5. The method of any one of claims 1 to 4, further comprising:
when the detection result further comprises a target object, acquiring contour information corresponding to the target object;
determining an object type corresponding to the target object according to the detection result and the contour information;
controlling the cleaning robot to move the target object to a target position corresponding to the object type when the object type belongs to the target type.
6. The method of any one of claims 1 to 4, further comprising:
when the detection result further comprises a dynamic object, determining an object position corresponding to the dynamic object according to the environment object;
determining behavior track information corresponding to the dynamic object according to the object position;
and generating an object moving area according to the behavior track information, and controlling the cleaning robot to perform corresponding cleaning operation on the object moving area.
7. A scene recognition based cleaning device, the device comprising:
the image detection module is used for acquiring an environment image corresponding to an area to be cleaned; performing target detection on the environment image, and acquiring an environment object corresponding to the area to be cleaned when the detection result shows that the area to be cleaned comprises a target area; the environment object is a static object in the area to be cleaned that has an association relation with a scene corresponding to the area to be cleaned;
the scene recognition module is used for calling a scene recognition model, carrying out scene recognition on the area to be cleaned according to the environment object, and obtaining an area scene corresponding to the area to be cleaned;
a type determining module, configured to perform weighting processing on a plurality of detection area types corresponding to the target area in the detection result according to a plurality of area type weights corresponding to the area scene, and determine the area type corresponding to the target area according to the weighting processing result;
and the area cleaning module is used for adjusting the cleaning strategy according to the area type and executing corresponding cleaning operation on the target area according to the adjusted cleaning strategy.
8. The apparatus of claim 7, wherein the scene recognition module is further configured to generate an object group matrix from the environmental objects; calling a scene recognition model, and extracting features according to the scene recognition model to obtain object group features corresponding to the object group matrix; and carrying out scene recognition according to the object group characteristics to obtain an area scene corresponding to the area to be cleaned and output by the scene recognition model.
9. The apparatus of claim 7,
the type determining module is further configured to acquire a plurality of area type weights corresponding to the area scene; obtain a plurality of detection area types corresponding to the target area in the detection result, and perform weighting processing on the detection area types according to the area type weights to obtain area type scores; and determine the area type of the target area from the detection area types according to the area type scores.
10. The apparatus of claim 7,
the area cleaning module is also used for acquiring user schedule information corresponding to the user identification in a preset time period; predicting according to the user schedule information corresponding to the user identification to obtain a periodic behavior type corresponding to the user identification; and adjusting a corresponding time sub-strategy of the cleaning robot according to the periodic behavior type.
11. The apparatus according to any one of claims 7 to 10,
the area cleaning module is further configured to acquire contour information corresponding to a target object when the detection result further includes the target object; determining an object type corresponding to the target object according to the detection result and the contour information; controlling the cleaning robot to move the target object to a target position corresponding to the object type when the object type belongs to the target type.
12. The apparatus according to any one of claims 7 to 10,
the area cleaning module is further used for determining an object position corresponding to a dynamic object according to the environment object when the detection result further comprises the dynamic object; determining behavior track information corresponding to the dynamic object according to the object position; and generating an object moving area according to the behavior track information, and controlling the cleaning robot to perform corresponding cleaning operation on the object moving area.
13. A cleaning robot comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any of claims 1 to 6 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010455863.3A 2020-05-26 2020-05-26 Cleaning method and device based on scene recognition, cleaning robot and storage medium Active CN111568314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010455863.3A CN111568314B (en) 2020-05-26 2020-05-26 Cleaning method and device based on scene recognition, cleaning robot and storage medium

Publications (2)

Publication Number Publication Date
CN111568314A CN111568314A (en) 2020-08-25
CN111568314B (en) 2022-04-26

Family

ID=72119372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010455863.3A Active CN111568314B (en) 2020-05-26 2020-05-26 Cleaning method and device based on scene recognition, cleaning robot and storage medium

Country Status (1)

Country Link
CN (1) CN111568314B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111012261A (en) * 2019-11-18 2020-04-17 深圳市杉川机器人有限公司 Sweeping method and system based on scene recognition, sweeping equipment and storage medium
CN112056992B (en) * 2020-09-07 2021-07-23 珠海格力电器股份有限公司 Cleaning method and device for cleaning robot, cleaning robot and storage medium
CN112056993B (en) * 2020-09-07 2022-05-17 上海高仙自动化科技发展有限公司 Cleaning method, cleaning device, electronic equipment and computer-readable storage medium
CN112515536B (en) * 2020-10-20 2022-05-03 深圳市银星智能科技股份有限公司 Control method and device of dust collection robot and dust collection robot
CN112315383B (en) * 2020-10-29 2022-08-23 上海高仙自动化科技发展有限公司 Inspection cleaning method and device for robot, robot and storage medium
CN112971615A (en) * 2021-02-03 2021-06-18 追创科技(苏州)有限公司 Control method of intelligent cleaning equipment and intelligent cleaning equipment
CN112926512B (en) * 2021-03-25 2024-03-15 深圳市无限动力发展有限公司 Environment type identification method and device and computer equipment
CN113012149B (en) * 2021-04-14 2024-03-15 北京铁道工程机电技术研究所股份有限公司 Intelligent cleaning robot path planning method and system
CN115413959A (en) * 2021-05-12 2022-12-02 美智纵横科技有限责任公司 Operation method and device based on cleaning robot, electronic equipment and medium
CN115530675A (en) * 2021-08-10 2022-12-30 追觅创新科技(苏州)有限公司 Cleaning method and device for mobile robot, storage medium and electronic device
CN113598656B (en) * 2021-08-10 2022-10-18 追觅创新科技(苏州)有限公司 Cleaning method and device for mobile robot, storage medium and electronic device
US20220107642A1 (en) * 2021-12-17 2022-04-07 Intel Corporation Smart sanitation robot
CN114532919B (en) * 2022-01-26 2023-07-21 深圳市杉川机器人有限公司 Multi-mode target detection method and device, sweeper and storage medium
CN116636776A (en) * 2022-02-16 2023-08-25 追觅创新科技(苏州)有限公司 Method and device for determining cleaning strategy, storage medium and electronic device
WO2023169117A1 (en) * 2022-03-11 2023-09-14 追觅创新科技(苏州)有限公司 Control method for cleaning apparatus, cleaning apparatus and storage medium
CN220141540U (en) * 2022-05-20 2023-12-08 苏州宝时得电动工具有限公司 Cleaning robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102306709B1 (en) * 2014-08-19 2021-09-29 삼성전자주식회사 Robot cleaner, control apparatus, control system, and control method of robot cleaner
CN108107892B (en) * 2017-12-22 2020-12-25 重庆秉为科技有限公司 Intelligent cleaning instrument control method
US10878294B2 (en) * 2018-01-05 2020-12-29 Irobot Corporation Mobile cleaning robot artificial intelligence for situational awareness
CN111012261A (en) * 2019-11-18 2020-04-17 深圳市杉川机器人有限公司 Sweeping method and system based on scene recognition, sweeping equipment and storage medium
CN111067428B (en) * 2019-12-23 2020-12-25 珠海格力电器股份有限公司 Cleaning method, storage medium and cleaning equipment
CN111166249A (en) * 2020-02-28 2020-05-19 科沃斯机器人股份有限公司 Control method of self-moving robot, self-moving robot and water tank assembly


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant