CN113331743A - Method for cleaning floor by cleaning robot and cleaning robot - Google Patents

Method for cleaning floor by cleaning robot and cleaning robot

Info

Publication number
CN113331743A
CN113331743A (application CN202110642604.6A)
Authority
CN
China
Prior art keywords
cleaning
image
ground
cleaning robot
floor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110642604.6A
Other languages
Chinese (zh)
Inventor
林睿 (Lin Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Lantu Technology Co., Ltd.
Original Assignee
Suzhou Lantu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Lantu Technology Co., Ltd.
Priority: CN202110642604.6A
Publication: CN113331743A
Legal status: Pending

Classifications

    • A — HUMAN NECESSITIES
    • A47 — FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L — DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00 — Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24 — Floor-sweeping machines, motor-driven
    • A47L 11/40 — Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4011 — Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L 2201/00 — Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L 2201/06 — Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Abstract

The present disclosure provides a method for cleaning a floor with a cleaning robot, comprising: acquiring an image of the floor before the cleaning robot cleans it and an image of the floor after the cleaning robot cleans it; determining the cleanliness of the floor based on the before-cleaning image, the after-cleaning image, and floor material information; when the cleanliness is below a threshold, acquiring geometric information of an area to be cleaned, including its position, shape, and area; converting the geometric information of the area to be cleaned from image coordinates in the image coordinate system into global coordinates in the robot coordinate system; marking the geometric information of the area to be cleaned on a global map; and cleaning the area to be cleaned based on the marking and a cleaning strategy. The disclosure also provides the cleaning robot and a method for determining floor cleanliness.

Description

Method for cleaning floor by cleaning robot and cleaning robot
Technical Field
The present disclosure relates to a method of cleaning a floor with a cleaning robot, a cleaning robot, and a floor cleanliness determination method.
Background
Cleaning robots on the market are currently popular among users, for example household floor-sweeping robots widely used in consumer scenarios and cleaning robots deployed in public areas such as hotels, airports, stations, and office buildings. The main job of a cleaning robot is to clean the floor in its operating scene, through functions such as sweeping, vacuuming, and mopping; the key requirements are a good cleaning effect, high cleaning efficiency, and as little human involvement as possible.
However, current cleaning robots merely clean each area a preset number of times at a preset power along a preset or planned full-coverage route. Cleaning robots in the prior art do not determine the cleanliness of the floor, so the robot has no real knowledge of the actual cleaning result. In particular, when a very dirty floor requires higher power or repeated cleaning, the robot behaves unintelligently: it cleans only a designated area, or cleans all areas on every pass, without focusing on the dirtiest regions. This leads to low cleaning efficiency, and the cleaning effect is often unsatisfactory.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present disclosure provides a method of cleaning a floor by a cleaning robot, and a floor cleanliness determination method.
According to an aspect of the present disclosure, there is provided a method of cleaning a floor using a cleaning robot, including:
acquiring an image of the cleaning robot before cleaning the ground and an image of the cleaning robot after cleaning the ground;
acquiring the cleanliness of the ground based on the image before cleaning the ground, the image after cleaning the ground and the ground material information;
when the cleanliness is lower than a threshold value, acquiring geometric information including the position, the shape and the area of a region to be cleaned;
converting the geometric information of the area to be cleaned from the image coordinates of the image coordinate system into the global coordinates of the robot coordinate system;
marking the position of the area to be cleaned on a global map; and
cleaning the area to be cleaned based on the marking and a cleaning strategy.
In the method of cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, obtaining an image before the cleaning robot cleans the floor and an image after the cleaning robot cleans the floor includes: while the cleaning robot moves and performs a cleaning task,
acquiring an image of the uncleaned floor in front of the cleaning robot with a first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot with a second camera installed at the rear of the cleaning robot.
In the method for cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, acquiring the floor cleanliness based on the before-cleaning image, the after-cleaning image, and the floor material information comprises:
extracting image features based on the before-cleaning image and the after-cleaning image;
acquiring floor material information based on the before-cleaning image, the after-cleaning image, the image features, and a deep learning model;
calculating a coincidence region based on the before-cleaning and after-cleaning images; and
calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information.
In the method for cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, calculating the coincidence region based on the before-cleaning image and the after-cleaning image includes:
taking as the coincidence region the pixels for which the Euclidean distance between the global poses of the cleaning robot associated with the before-cleaning and after-cleaning image centers is smaller than a preset value, the preset value being determined based on the observation areas of the first camera and the second camera.
In the method for cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, calculating the coincidence region based on the before-cleaning image and the after-cleaning image further comprises:
performing an image transformation on the before-cleaning image and the after-cleaning image, where the image transformation includes size scaling and/or rotation transformation and affine transformation.
In the method for cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information comprises:
adopting a different cleanliness calculation method for each material, and computing at least one of the image gray-scale change of the coincidence region, the color-image pixel variance, and the variance of the gray-scale image after edge extraction to obtain the cleanliness.
In the method for cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information comprises:
searching the coincidence region, by template matching, for a sub-region matching the template size; and
adopting a different cleanliness calculation method for each material, and computing over the sub-region at least one of the image gray-scale change, the color-image pixel variance, and the variance of the gray-scale image after edge extraction to obtain the cleanliness.
In the method of cleaning a floor by a cleaning robot according to at least one embodiment of the present disclosure, marking the position of the area to be cleaned on a global map includes:
marking the different areas on a user mark layer of the global map according to actual usage.
In the method for cleaning the floor by the cleaning robot according to at least one embodiment of the present disclosure, cleaning the area to be cleaned based on the marking and the cleaning strategy includes:
cleaning the area to be cleaned a preset number of times with a preset cleaning force, based on the current floor material and the corresponding cleanliness.
According to still another aspect of the present disclosure, there is provided a cleaning robot including:
a first camera, installed at the front of the cleaning robot and facing obliquely downward, for acquiring an image of the uncleaned floor in front of the cleaning robot;
a second camera, installed at the rear of the cleaning robot and facing obliquely downward, for acquiring an image of the cleaned floor behind the cleaning robot;
an image processing apparatus comprising a processor and a memory, the memory of the image processing apparatus storing an executable program and the processor of the image processing apparatus executing the executable program stored in the memory, wherein the executable program: acquires the uncleaned-floor image and the cleaned-floor image, extracts image features, calculates the coincidence region, obtains the area to be cleaned through a scene recognition model, and converts the geometric information of the area to be cleaned from image coordinates into coordinates in the global coordinate system of the cleaning robot through a scene positioning model; and
a controller device comprising a processor and a memory, the memory of the controller device storing an executable program and the processor of the controller device executing the executable program stored in the memory, wherein the executable program: marks the area to be cleaned in the global map through a positioning mark model, and controls the cleaning robot to clean based on the mark.
According to still another aspect of the present disclosure, the cleaning robot further comprises:
a first camera compensation light source for compensating the lighting when the first camera captures images; and
a second camera compensation light source for compensating the lighting when the second camera captures images.
According to still another aspect of the present disclosure, there is provided a floor cleanliness determination method, including:
acquiring an image before the cleaning robot cleans the floor and an image after the cleaning robot cleans the floor; and
acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image, and the floor material information.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, acquiring an image before the cleaning robot cleans the floor and an image after the cleaning robot cleans the floor includes: while the cleaning robot moves and performs a cleaning task,
acquiring an image of the uncleaned floor in front of the cleaning robot with a first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot with a second camera installed at the rear of the cleaning robot.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, acquiring the floor cleanliness based on the before-cleaning image, the after-cleaning image, and the floor material information includes the following steps:
extracting image features based on the before-cleaning image and the after-cleaning image;
acquiring floor material information based on the before-cleaning image, the after-cleaning image, the image features, and a deep learning model;
calculating a coincidence region based on the before-cleaning and after-cleaning images; and
calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, calculating the coincidence region based on the before-cleaning image and the after-cleaning image includes:
taking as the coincidence region the pixels for which the Euclidean distance between the global poses of the cleaning robot associated with the before-cleaning and after-cleaning image centers is smaller than a preset value, the preset value being determined based on the observation areas of the first camera and the second camera.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, calculating the coincidence region based on the before-cleaning image and the after-cleaning image further includes:
performing an image transformation on the before-cleaning image and the after-cleaning image, where the image transformation includes size scaling and/or rotation transformation and affine transformation.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information includes:
adopting a different cleanliness calculation method for each material, and computing at least one of the image gray-scale change of the coincidence region, the color-image pixel variance, and the variance of the gray-scale image after edge extraction to obtain the cleanliness.
According to the floor cleanliness determination method provided by an embodiment of the present disclosure, calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information includes:
searching the coincidence region, by template matching, for a sub-region matching the template size; and
adopting a different cleanliness calculation method for each material, and computing over the sub-region at least one of the image gray-scale change, the color-image pixel variance, and the variance of the gray-scale image after edge extraction to obtain the cleanliness.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow diagram of a method of cleaning a floor by a cleaning robot according to one embodiment of the present disclosure.
Fig. 2 is a schematic structural view of a cleaning robot according to an embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method for determining floor cleanliness according to an embodiment of the present disclosure.
Description of reference numerals
1000 cleaning robot
1001 image processing apparatus
1002 controller device
1003 first camera
1004 second camera
1005 first camera compensating light source
1006 second camera compensation light source
Detailed Description
The present disclosure will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limitations of the present disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the illustrated exemplary embodiments/examples are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Accordingly, unless otherwise indicated, features of the various embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.
The use of cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless otherwise noted, the presence or absence of cross-hatching or shading does not convey or indicate any preference or requirement for a particular material, material property, size, proportion, commonality between the illustrated components and/or any other characteristic, attribute, property, etc., of a component. Further, in the drawings, the size and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While example embodiments may be practiced differently, the specific process sequence may be performed in a different order than that described. For example, two processes described consecutively may be performed substantially simultaneously or in reverse order to that described. In addition, like reference numerals denote like parts.
When an element is referred to as being "on" or "on," "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to" or "directly coupled to" another element, there are no intervening elements present. For purposes of this disclosure, the term "connected" may refer to physically, electrically, etc., and may or may not have intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, the presence of stated features, integers, steps, operations, elements, components and/or groups thereof are stated but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and other similar terms are used as approximate terms and not as degree terms, and as such, are used to interpret inherent deviations in measured values, calculated values, and/or provided values that would be recognized by one of ordinary skill in the art.
Fig. 1 shows a method of cleaning a floor by a cleaning robot according to one embodiment of the present disclosure.
As shown in fig. 1, a method S100 of cleaning a floor by a cleaning robot includes:
S102: acquiring an image before the cleaning robot cleans the floor and an image after the cleaning robot cleans the floor;
S104: acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image, and the floor material information;
S106: when the cleanliness is below a threshold, acquiring geometric information of the area to be cleaned, including its position, shape, and area;
S108: converting the geometric information of the area to be cleaned from image coordinates in the image coordinate system into global coordinates in the robot coordinate system;
S110: marking the geometric information of the area to be cleaned on a global map; and
S112: cleaning the area to be cleaned based on the marking and a cleaning strategy.
Obtaining the image before the cleaning robot cleans the floor and the image after the cleaning robot cleans the floor includes: while the cleaning robot moves and performs a cleaning task,
acquiring an image of the uncleaned floor in front of the cleaning robot with the first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot with the second camera installed at the rear of the cleaning robot.
S104, acquiring the floor cleanliness based on the before-cleaning image, the after-cleaning image, and the floor material information, includes:
extracting image features based on the before-cleaning image and the after-cleaning image;
acquiring floor material information based on the before-cleaning image, the after-cleaning image, the image features, and a deep learning model;
calculating a coincidence region based on the before-cleaning and after-cleaning images and the image features; and
calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information.
Image feature extraction is implemented with a feature detection algorithm, including but not limited to feature descriptors such as SIFT, SURF, ORB, FAST, HOG, HAAR, and LBP, or deep-learning-based detection frameworks such as TensorFlow, YOLO, and Faster-RCNN, used individually or in combination.
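As a minimal illustration of the feature-extraction step, the sketch below detects ORB keypoints and descriptors on a floor image with OpenCV; the function name and parameter values are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch: ORB feature extraction for a floor image (illustrative only).
import cv2

def extract_features(image_bgr):
    """Detect keypoints and compute ORB descriptors on a floor image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)  # nfeatures chosen arbitrarily here
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```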
The floor materials include hard floor, epoxy flooring, short-pile carpet, long-pile carpet, and other materials. The floor material is recognized by a deep learning model, which can be trained on image samples collected from various floor materials, or on training samples consisting of image features extracted from such images.
Calculating the coincidence region based on the before-cleaning image, the after-cleaning image, and the image features includes:
requiring that the Euclidean distance between the global poses of the cleaning robot associated with the before-cleaning and after-cleaning image centers is smaller than a preset value, the preset value being determined based on the observation areas of the first camera and the second camera.
First, the key feature information of the first camera and the second camera is matched to obtain an affine transformation model between the two camera images. Then, based on the global optimal pose estimates labelled on the first-camera and second-camera images, the pixels belonging to the coincidence region before and after cleaning are identified and counted, giving the area of the coincidence region. In view of the physical fields of view and the mounting positions, the two cameras are generally identical and mounted symmetrically front and back on the cleaning robot. For simplicity, it suffices to compute the area enclosed by the four image corners of the two cameras; alternatively, since the cameras are fixed and known, the size of the observed floor area is fixed, and it suffices to compute the distance between the global optimal poses associated with the image centers of the two cameras. Specifically, pixels satisfying the following expression are taken as the coincidence region:
$f([x_{f,i}\ y_{f,i}\ \theta_{f,i}]^T, [x_{b,j}\ y_{b,j}\ \theta_{b,j}]^T) < dist$

where the parameters have the following meanings:

$[x_{f,i}\ y_{f,i}\ \theta_{f,i}]^T$ is the global optimal pose of the cleaning robot labelled on the first-camera image at time frame i;
$[x_{b,j}\ y_{b,j}\ \theta_{b,j}]^T$ is the global optimal pose of the cleaning robot labelled on the second-camera image at time frame j;
$f(\cdot)$ is the Euclidean physical distance between the global optimal poses labelled by the first and second cameras; and
$dist$ is the size of the region observed by the camera, a fixed value once the camera is mounted.
Given the mounting positions of the two cameras, the moving direction and speed of the cleaning robot, and the planned cleaning route, a certain time difference is needed before the same floor area observed by the first camera is observed by the second camera; that is, the second camera sees the area several time frames later than the first camera. Since the global pose of the cleaning robot is computed in real time and the mounting positions of both cameras relative to the robot are known, it can be determined in real time whether the second camera is observing a floor region that coincides with one observed by the first camera, and how large the coinciding region is.
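A minimal sketch of this coincidence test, assuming the poses are available as (x, y, theta) tuples labelled on the two camera streams; all names here are illustrative assumptions:

```python
# Hedged sketch: pair a first-camera frame i with a second-camera frame j when
# the Euclidean distance between their labelled global poses is below dist.
import math

def poses_coincide(pose_f, pose_b, dist):
    """pose_f = (x_f, y_f, theta_f) labelled on the first-camera image at
    frame i; pose_b likewise for the second camera at frame j."""
    dx = pose_f[0] - pose_b[0]
    dy = pose_f[1] - pose_b[1]
    return math.hypot(dx, dy) < dist  # dist is fixed by the camera footprint
```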
Calculating the coincidence region based on the before-cleaning image and the after-cleaning image further includes:
performing an image transformation on the before-cleaning image and the after-cleaning image, where the image transformation includes size scaling and/or rotation transformation and affine transformation.
The images acquired by the first and second cameras undergo scale transformations such as rotation and offset, ensuring that the origins of the image coordinate systems of the coincidence region are consistent. The rotation and scaling transformation is computed with the general formula:

$\begin{bmatrix} x' \\ y' \end{bmatrix} = r \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} u \\ v \end{bmatrix}$

where the parameters have the following physical meanings:

x, y are the image coordinates before the transformation, and x', y' the image coordinates after it;
r is the scaling factor, related to the physical mounting angle of the camera;
$\alpha$ is the rotation angle, related to the physical mounting angle of the camera; and
u, v are the offsets between the first and second cameras in the 2D image coordinate system.

r, $\alpha$, u, and v are determined by the physical mounting parameters of the first and second cameras of the cleaning robot and can be obtained directly by camera calibration; they are related to the image-labelled global optimal poses $[x_{f,i}\ y_{f,i}\ \theta_{f,i}]^T$ and $[x_{b,j}\ y_{b,j}\ \theta_{b,j}]^T$.
When the first and second cameras are mounted symmetrically front and back, the corresponding parameter values can be expressed as:

$r = 1$

$\alpha = \theta_{f,i} - \theta_{b,j} + \pi$

$u = x_{f,i} - x_{b,j} + d$

$v = y_{f,i} - y_{b,j}$

where the parameters have the following physical meanings:

d is a fixed parameter related to the width of the cleaning robot and the mounting angle and height of the cameras;
$\pi$ is the rotation angle between the first and second cameras mounted symmetrically front and back; and
u, v are the offsets between the first and second cameras in the 2D image coordinate system.
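As an illustration of applying this transform, the sketch below warps the second-camera image into the first-camera image frame with OpenCV's affine warp, given r, α, u, and v from calibration; it is an assumption-level sketch, not the patent's implementation:

```python
# Hedged sketch: apply the scale/rotation/offset transform as a 2x3 affine warp.
import numpy as np
import cv2

def align_images(img_b, r, alpha, u, v):
    """Warp the after-cleaning (second-camera) image into the
    before-cleaning (first-camera) image frame."""
    c, s = r * np.cos(alpha), r * np.sin(alpha)
    M = np.float32([[c, -s, u],
                    [s,  c, v]])  # similarity transform in matrix form
    h, w = img_b.shape[:2]
    return cv2.warpAffine(img_b, M, (w, h))
```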
wherein, based on coincidence zone and ground material information, calculate coincidence zone's cleanliness, include:
and calculating at least one of the image gray scale change of the overlapping area, the color image pixel variance and the gray scale image variance after edge extraction to obtain the cleanliness by adopting different cleanliness calculation methods according to different ground materials. Generally, for the ground of most commercial scenes, the judgment of the cleanliness of the ground is to identify and locate a local ground area which has larger variation correlation between texture, color and the like compared with the surrounding area. Considering different ground surfaces, such as different materials of carpet, epoxy floor, hard bottom plate, cement road surface, etc., the texture and color change of the ground surfaces are different. For example, for epoxy terraces and indoor hard floors in general factories and workshop environments, the roughness of the ground surface is small, the color change is small, the texture is simple or the consistency is good, and the original color pixel variance of the image area and the gray level image pixel variance transformation after edge extraction are both small. For example, for the carpet of a hotel and other materials, the texture is more, the image gray scale change is larger, and the original color pixel variance in the image area and the gray scale pixel variance transformation after edge extraction are both larger. At present, the conventional method for detecting the ground material can be effectively judged by installing a special material sensor or a visual identification method. The material sensor is not limited to the ultrasonic sensor and the like, and the method of visual recognition is not limited to the method based on deep learning training, such as Tensorflow and Yolo. For different ground materials, the area gray level mean value, the variance value and the threshold value of each area gray level are different when the cleanliness is calculated.
Different calculations are carried out according to the ground cleanliness of different materials, and the calculation can be comprehensively judged by judging the image gray level change of the overlapped area, the original color pixel variance and the gray level image pixel variance after edge extraction:
$i_f = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} I_{f,u,v}, \qquad \bar{i}_f = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} \bar{I}_{f,u,v}$

$i_b = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} I_{b,u,v}, \qquad \bar{i}_b = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} \bar{I}_{b,u,v}$

$v_f = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} (I_{f,u,v} - i_f)^2, \qquad \bar{v}_f = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} (\bar{I}_{f,u,v} - \bar{i}_f)^2$

$v_b = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} (I_{b,u,v} - i_b)^2, \qquad \bar{v}_b = \frac{1}{|\Omega|} \sum_{[u\ v] \in \Omega} (\bar{I}_{b,u,v} - \bar{i}_b)^2$

where the parameters have the following physical meanings:

$I_{f,u,v}$ and $I_{b,u,v}$ are the gray values at pixel $[u\ v]$ of the coincidence region $\Omega$ in the original gray-scale images of the first and second cameras, respectively;
$\bar{I}_{f,u,v}$ and $\bar{I}_{b,u,v}$ are the gray values at pixel $[u\ v]$ of the coincidence region $\Omega$ in the edge-extracted gray-scale images of the first and second cameras, respectively;
$\Sigma$ denotes the statistical summation over the coincidence region $\Omega$ used to measure the image gray-scale change;
$i_f$, $\bar{i}_f$, $i_b$, $\bar{i}_b$ are the arithmetic means of the gray values at the pixels $[u\ v]$ of the coincidence region $\Omega$, for the original and edge-extracted images of the first camera and of the second camera, respectively; and
$v_f$, $\bar{v}_f$, $v_b$, $\bar{v}_b$ are the corresponding arithmetic variances of those gray values, computed over the coincidence region $\Omega$.
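A hedged sketch of these statistics for one camera, assuming the coincidence region Ω is given as a binary mask and using a Canny edge map as the edge-extracted image (the disclosure does not name a specific edge extractor):

```python
# Hedged sketch: mean/variance of gray values over the coincidence region, on
# the original grayscale image and on an (assumed) Canny edge-extracted image.
import cv2
import numpy as np

def region_stats(gray, mask):
    """Mean and variance of gray values over the coincidence region mask."""
    vals = gray[mask > 0].astype(np.float64)
    return vals.mean(), vals.var()

def cleanliness_features(gray, mask):
    i_raw, v_raw = region_stats(gray, mask)   # i_f / v_f style statistics
    edges = cv2.Canny(gray, 50, 150)          # edge-extracted grayscale image
    i_edge, v_edge = region_stats(edges, mask)
    return i_raw, v_raw, i_edge, v_edge
```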
Calculating the cleanliness of the coincidence region based on the coincidence region and the floor material information includes:
searching the coincidence region, by template matching, for a sub-region matching the template size; and
adopting a different cleanliness calculation method for each material, and computing over the sub-region at least one of the image gray-scale change, the color-image pixel variance, and the variance of the gray-scale image after edge extraction to obtain the cleanliness.
For floor materials with heavy texture such as carpet, small dirty spots are difficult to detect in real time, so an image template-matching method can be applied to large dirty areas: a small patch of fixed size, for example 20x20 pixels, is extracted from the coincidence region $\Omega$ as a template, and matching areas of the same size are searched in the first-camera and second-camera images and taken as sub-regions.
For each material, a different cleanliness calculation method is adopted, and the cleanliness is obtained by computing over the sub-regions at least one of the image gray-scale change, the color-image pixel variance, and the variance of the edge-extracted gray-scale image, in the same way as the cleanliness calculation above. For example, with the original color pixel variance method, the sub-regions of the coincidence region $\Omega$ are examined; when the number of sub-regions satisfying the match exceeds a certain threshold (a sub-region is considered matched when $|v_b - v_f| > \text{threshold}$), an area to be cleaned is determined. The threshold is a value related to the floor material and can generally be confirmed in advance by experiment. Further, to make the judgment accurate, the size of the sub-region can be adjusted dynamically.
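A sketch of the template-matching variant, assuming OpenCV's normalized cross-correlation as the matching metric (the disclosure does not specify one); the returned variance difference |v_b - v_f| is then compared against the material-dependent threshold:

```python
# Hedged sketch: cut a small patch (e.g. 20x20 px) from the before-cleaning
# coincidence region, find its best match in the after-cleaning image, and
# return the variance difference used in the dirtiness test above.
import cv2

def subregion_variance_diff(img_before, img_after, top_left, size=20):
    y, x = top_left
    template = img_before[y:y + size, x:x + size]
    result = cv2.matchTemplate(img_after, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)   # best-match location (x, y)
    mx, my = max_loc
    sub = img_after[my:my + size, mx:mx + size]
    return abs(float(sub.var()) - float(template.var()))  # |v_b - v_f|
```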
S108, converting the geometric information of the area to be cleaned from image coordinates in the image coordinate system into global coordinates in the robot coordinate system, includes:
acquiring the global optimal pose estimate in real time; and
based on the global optimal pose estimate, converting the geometric information of the area to be cleaned from coordinates in the image coordinate system into coordinates in the global coordinate system of the robot.
Global pose-most estimate [ x ] obtained in real time based on cleaning robotk yk θk]TPosition of area to be cleaned [ u ]k,i vk,i]TAnd magnitude Γk,iThe isogeometric information is converted into a global coordinate system, wherein the subscript k denotes a time frame, i denotes the sequence number of the area to be cleaned, and the current time frame may detect the existence of N (N > 1) areas to be cleaned.
To execute a cleaning task in a scene, the cleaning robot must know its exact position at the current moment in real time; only then can it know where to go next, plan an efficient full-coverage path, and guarantee that the floor of the working scene is completely covered and cleaned. When executing a cleaning task, the robot may have no global scene map in advance, in which case it cleans while building the scene map and localizing globally, i.e., Simultaneous Localization and Mapping (SLAM). Alternatively, a global scene map may be stored in advance, in which case the robot localizes against the known global map while cleaning.

Either way, the cleaning robot must localize globally at all times, i.e., obtain in real time the global optimal pose estimate in the working scene. The robot is generally considered to have 3 degrees of freedom in the scene: the offsets $x_k$ and $y_k$ along the x and y directions of the global coordinate system, and the heading angle $\theta_k$. Global localization can be implemented with an Extended Kalman Filter (EKF), a Bayesian Filter (BF), a Particle Filter (PF), Monte Carlo Localization (MCL), ORB-SLAM, VINS (Visual-Inertial Navigation System), or other localization algorithms.
The geometry $\Gamma_{k,i}$ of the area to be cleaned can be characterized as a circle, an ellipse, or a polygon, as follows.

If characterized as a circular region with radius $r_i$, $\Gamma_{k,i}$ can be characterized as $\Gamma_{k,i} = \mathrm{diag}(1/r_i^2,\ 1/r_i^2)$, with the expression:

$[u_{k,i}\ v_{k,i}]\ \Gamma_{k,i}\ [u_{k,i}\ v_{k,i}]^T = 1$

If characterized as an elliptical region, $\Gamma_{k,i}$ can be characterized as the symmetric matrix

$\Gamma_{k,i} = \begin{bmatrix} a_i & c_i \\ c_i & b_i \end{bmatrix}$

with the same expression:

$[u_{k,i}\ v_{k,i}]\ \Gamma_{k,i}\ [u_{k,i}\ v_{k,i}]^T = 1$

where the major and minor axes of the ellipse $\lambda_{i,j}$ (j = 1, 2) and the corresponding angles $\theta_{i,j}$ (j = 1, 2) are:

$\lambda_{i,1} = \frac{a_i + b_i - \sqrt{(a_i - b_i)^2 + 4 c_i^2}}{2}$

$\lambda_{i,2} = \frac{a_i + b_i + \sqrt{(a_i - b_i)^2 + 4 c_i^2}}{2}$

$\theta_{i,j} = \arctan\frac{\lambda_{i,j} - a_i}{c_i}, \quad j = 1, 2$

where $a_i$, $b_i$, $c_i$ are the element values of the matrix expression of the geometry $\Gamma_{k,i}$ at the upper-left corner, the lower-right corner, and the off-diagonal, respectively. The major axis of the ellipse is $\lambda_{i,1}$ and the minor axis is $\lambda_{i,2}$, with corresponding angles $\theta_{i,1}$ and $\theta_{i,2}$.

If characterized as a polygonal region, $\Gamma_{k,i}$ can be characterized as an m x 2 matrix whose row j holds the image coordinates $[u_{i,j}\ v_{i,j}]$ of vertex j of polygon i; the line segments between consecutive vertices are the polygon's sides, and m is the number of vertices (for example, m = 4 for a rectangle).
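For the elliptical case above, the axis lengths and orientations can equivalently be recovered by an eigendecomposition of the matrix form of Γ; the sketch below is a standard linear-algebra illustration of that step, not the patent's exact formulas, using the a/b/c element layout described above:

```python
# Hedged sketch: semi-axes and orientations of the ellipse [u v] G [u v]^T = 1,
# with G = [[a, c], [c, b]] (a = upper-left, b = lower-right, c = off-diagonal).
import numpy as np

def ellipse_axes(a, b, c):
    gamma = np.array([[a, c], [c, b]], dtype=np.float64)
    eigvals, eigvecs = np.linalg.eigh(gamma)   # ascending, assumed positive
    semi_axes = 1.0 / np.sqrt(eigvals)         # small eigenvalue = long axis
    angles = np.arctan2(eigvecs[1, :], eigvecs[0, :])
    return semi_axes, angles                   # index 0 is the major axis
```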
The geometric information such as the center position and size of the circle, ellipse, or polygon is converted into the global coordinate system. The camera is mounted obliquely downward at an angle $\alpha_c$ to the horizontal, with an offset $[x_c\ y_c\ z_c]^T$ relative to the center of the cleaning robot, $z_c$ being the mounting height of the camera. The center position $[u_{k,i}\ v_{k,i}]^T$ is first projected into the robot coordinate system as the intermediate value $[\tilde{x}_{k,i}\ \tilde{y}_{k,i}]^T$, using the image center $[u_c\ v_c]^T$, the scale factors $[f_u\ f_v]^T$, and the camera mounting parameters, and then transformed into its position $[p_{k,i}\ q_{k,i}]^T$ in the global coordinate system:

$\begin{bmatrix} p_{k,i} \\ q_{k,i} \end{bmatrix} = \begin{bmatrix} \cos\theta_k & -\sin\theta_k \\ \sin\theta_k & \cos\theta_k \end{bmatrix} \begin{bmatrix} \tilde{x}_{k,i} \\ \tilde{y}_{k,i} \end{bmatrix} + \begin{bmatrix} x_k \\ y_k \end{bmatrix}$

where the parameters have the following physical meanings:

$[\tilde{x}_{k,i}\ \tilde{y}_{k,i}]^T$ are the coordinates of the image center coordinates in the robot coordinate system, an intermediate value of the coordinate transformation;
$[u_c\ v_c]^T$ is the image center, typically 1/2 of the length and width resolution; and
$[f_u\ f_v]^T$ is the conversion scale factor related to $\alpha_c$, which must be determined by calibrating the camera.
finally, the coordinates of the ith cleanliness area center of the k time frame in the global coordinate system can be obtained. For the sake of simplicity, the outline of an image region, such as a circle and an ellipse, is converted from image coordinates to global coordinates by an approximate affine transformation, where the conversion formula is expressed as follows:
Figure BDA0003107622470000156
wherein, the physical meaning of each parameter is as follows:
xc、yc、zcand alphacRepresenting camera mounting parameters respectively representing a forward (X coordinate direction), a leftward (Y coordinate direction), an upward offset (Z coordinate direction) of an origin of a camera image coordinate system with respect to an origin of a cleaning robot coordinate system, and a rotation angle along a Y axis;
a. b, c, d, e, f, representing the camera mounting parameter xc、yc、zcAnd alphacRelated parameters characterizing the rotation matrix
Figure BDA0003107622470000157
And translation vector
Figure BDA0003107622470000158
Requiring direct acquisition by camera calibration methods, e.g. tensor definition, etc。
For the outline of an image region characterized as a polygon, the image coordinates are converted to global coordinates in the same way: since each row of the region matrix is a discrete image coordinate, a conversion formula and method similar to those for the image center coordinates can be adopted, and the details are not repeated here.
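As an illustration of the image-to-global conversion described above, the sketch below chains the two steps; the flat-ground pinhole projection in the first step is purely an assumption for illustration, since the patent gives the intermediate formulas only through calibration parameters:

```python
# Hedged sketch: image coordinates -> robot frame -> global frame.
# The projection model below is a hypothetical flat-ground pinhole model.
import math
from dataclasses import dataclass

@dataclass
class CameraParams:
    u_c: float      # image center, horizontal (pixels)
    v_c: float      # image center, vertical (pixels)
    f_u: float      # calibration scale factor, horizontal
    f_v: float      # calibration scale factor, vertical
    x_c: float      # forward offset of camera vs. robot center
    y_c: float      # leftward offset
    z_c: float      # mounting height
    alpha_c: float  # downward tilt from horizontal (rad)

def image_to_global(u, v, pose, cam):
    """pose = (x_k, y_k, theta_k): global robot pose at the frame."""
    pitch = cam.alpha_c + math.atan((v - cam.v_c) / cam.f_v)  # ray depression
    x_r = cam.x_c + cam.z_c / math.tan(pitch)                 # robot frame
    y_r = cam.y_c + (cam.u_c - u) * cam.z_c / (cam.f_u * math.sin(pitch))
    x_k, y_k, th = pose                                       # global frame
    p = x_k + x_r * math.cos(th) - y_r * math.sin(th)
    q = y_k + x_r * math.sin(th) + y_r * math.cos(th)
    return p, q
```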
Marking the geometric information of the area to be cleaned on the global map includes:
marking the different areas on a user mark layer of the global map according to actual usage.
The global map can be represented as a 2D grid map, a 2D geometric map, a 2D topological map, or a 3D point cloud map. Without limitation, the global map can be divided into two layers: a base layer and a user mark layer. In processing, both layers are generally 2D arrays, and can be stored directly in memory as pictures or in file formats such as json.

The base layer is mainly the layer that records and expresses the raw sensor data of the scene. It is built directly from the sensors mounted on the cleaning robot, either with a SLAM algorithm or generated directly by reading a layout plan of the scene. In the base layer, 1 generally denotes the position of an obstacle such as a wall or a table, through which the cleaning robot is forbidden to pass; 0 denotes an obstacle-free position through which the robot may pass; and -1 denotes an undetermined area. With $\chi_{u,v}$ denoting the element value at map coordinate $[u\ v]^T$, this is expressed as:

$\chi_{u,v} = \begin{cases} 1, & \text{obstacle} \\ 0, & \text{free} \\ -1, & \text{undetermined} \end{cases}$

The user mark layer mainly lets the user configure marks such as forbidden zones, forbidden lines, charging pile positions, dirty areas, and multiple-cleaning areas. The area to be cleaned is marked at the corresponding position, with its area size, on the user mark layer of the global map.
The geometric information obtained in the global coordinate system is used to mark the global map accordingly: the low-cleanliness area is marked at the corresponding position, with its area size, on the user mark layer of the global map. For special dirty areas, auxiliary marks such as the number of cleanings already performed can further be added. For example, the base layer of a conventional two-dimensional grid map can be characterized as:

$X = \{\chi_{u,v}\}, \quad [u\ v]^T \in \text{the 2D map grid}$

where $\chi_{u,v}$ is the value at grid point $[u\ v]^T$ of the scene 2D global map, and $X$ is the set of two-dimensional map grid points.
Like the base layer, the user mark layer is a 2D representation; it mainly records the user's configuration marks at the positions corresponding to the base layer, is also a global map in the broad sense, can be merged with the base layer, and can be characterized as:

$\Psi = \{\psi_{u,v}\}, \quad [u\ v]^T \in \text{the 2D map grid}$

where, according to whether the element corresponds to a forbidden zone, a forbidden line, a charging pile position, an area to be cleaned or dirty area, a multiple-cleaning area, and so on, the element value $\psi_{u,v}$ at map coordinate $[u\ v]^T$ is given a value between 1 and 255; otherwise $\psi_{u,v} = 0$.
Cleaning the area to be cleaned based on the marking and the cleaning strategy includes:
cleaning the area to be cleaned a preset number of times with a preset cleaning force, based on the current floor material and the corresponding cleanliness.
The cleaning strategy is stored in the memory of the controller device as a parameter list configured in advance by the user or by default, for example in the form of a json file. The json file storing the cleaning strategy is mainly a collection of name/value pairs: "Coordinate", "Size", "Cleanliness", "Times", and "Special" respectively represent the area position and size set by the user in the user mark layer, the cleanliness threshold for re-cleaning, the accumulated or consecutive cleaning counts, and the special-area flag, establishing the strategy and criterion for deciding on a second or repeated cleaning. When the position and extent of a low-cleanliness area fall within the user-set "Coordinate" and "Size" in the user mark layer, the cleaning robot is instructed to clean again, up to the configured number of consecutive cleanings "Times", until the cleanliness of the area exceeds "Cleanliness". If after "Times" cleanings the cleanliness exceeds "Cleanliness", "Special" is marked 0; otherwise it is marked 1, and the area is a special dirty area. Of course, whether a low-cleanliness area falls under the cleaning strategy of a set region can also be confirmed simply by counting how many grid cells of the low-cleanliness area coincide with the set region of the user mark layer in the global grid map.
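A hypothetical example of one such name/value entry, with illustrative values only (the key names follow the text above; the value formats are assumptions):

```json
{
  "Coordinate": [12.5, 3.2],
  "Size": [1.0, 0.8],
  "Cleanliness": 0.85,
  "Times": 3,
  "Special": 0
}
```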
Based on the position and extent of the low-cleanliness areas marked in the global map, and after the uniformly set cleaning task has been executed, the cleaning robot is commanded, according to the preset cleaning strategy, to perform a targeted second or repeated cleaning of the low-cleanliness areas, for example taking the recorded number of cleanings already performed as an auxiliary identifier, or cleaning within a certain limit on the number of cleanings until the cleanliness of the low-cleanliness area reaches the required level.
Fig. 2 is a schematic structural view of a cleaning robot according to an embodiment of the present disclosure.
As shown in fig. 2, the cleaning robot 1000 includes:
a first camera 1003, installed at the front of the cleaning robot and facing obliquely downward, for acquiring an image of the uncleaned floor in front of the cleaning robot;
a second camera 1004, installed at the rear of the cleaning robot and facing obliquely downward, for acquiring an image of the cleaned floor behind the cleaning robot;
an image processing apparatus 1001 comprising a processor and a memory, the memory of the image processing apparatus 1001 storing an executable program and the processor of the image processing apparatus 1001 executing the executable program stored in the memory, wherein the executable program: acquires the uncleaned-floor image and the cleaned-floor image, extracts image features, calculates the coincidence region, obtains the area to be cleaned through a scene recognition model, and converts the geometric information of the area to be cleaned from image coordinates into coordinates in the global coordinate system of the cleaning robot through a scene positioning model;
a controller device 1002 comprising a processor and a memory, the memory of the controller device 1002 storing an executable program and the processor of the controller device 1002 executing the executable program stored in the memory, wherein the executable program: marks the area to be cleaned in the global map through a positioning mark model, and controls the cleaning robot to clean based on the mark;
a first camera compensation light source 1005 for compensating the lighting when the first camera 1003 captures images; and
a second camera compensation light source 1006 for compensating the lighting when the second camera 1004 captures images.
The mounting angle of the first camera 1003 ranges from horizontally forward to vertically downward.
The mounting angle of the second camera 1004 ranges from horizontally backward to vertically downward.
The image processing apparatus 1001 may include, but is not limited to, a single chip microcomputer, an ARM, a DSP, an FPGA, a CPU, a memory, a control circuit, and the like.
The controller device 1002 may include, but is not limited to, a single chip, an ARM, a DSP, an FPGA, a CPU, a memory, a control circuit, and the like.
The scene recognition model calculates the cleanliness of the coincidence region based on the coincidence region and the floor material, and determines the area to be cleaned according to the cleanliness.
The scene positioning model converts the coordinates of the area to be cleaned in the image coordinate system into coordinates in the global coordinate system.
The positioning mark model marks the area to be cleaned in the global map.
The scene recognition model, the scene positioning model, and the positioning mark model are each executable programs, stored in the memory of the respective device and executed by its processor, realizing the scene recognition, scene positioning, or positioning-mark function of the corresponding model.
When acquiring images, the first camera 1003 and the second camera 1004 control the camera compensation light sources according to the brightness of the captured scene; that is, the switches are triggered at timed intervals, and the first camera compensation light source 1005 and the second camera compensation light source 1006 are turned on while images are being acquired.
To reduce power consumption during the cleaning task, the cleaning robot 1000 processes the floor images of the corresponding areas in front of and behind it, acquired by the first camera 1003 and the second camera 1004, in the image processing apparatus 1001, which invokes the scene recognition model and the scene positioning model. The image processing apparatus 1001 is installed inside the cleaning robot.
The controller device 1002 is responsible for invoking the positioning mark model and for controlling execution of the cleaning task. The controller device 1002 stores a global map of the working scene, or can build the global map of the working scene while performing the cleaning task.

While controlling the cleaning robot to perform a cleaning task, the controller device 1002 also performs real-time global positioning, obtains the global optimal pose estimate of the cleaning robot 1000, and sends it in real time through the communication interface to the image processing apparatus 1001, so as to label the scene images acquired by the first and second cameras at the corresponding moments with the global pose.

After the uniformly set cleaning task has been executed, the controller device 1002 directs the cleaning robot, according to the preset cleaning strategy, to perform a targeted second or repeated cleaning of the low-cleanliness areas (i.e., the areas to be cleaned), for example taking the floor material and the recorded number of cleanings already performed as auxiliary identifiers, or cleaning within a certain limit on the number of cleanings until the cleanliness of the low-cleanliness area meets the requirement.
Fig. 3 is a flowchart illustrating a method for determining floor cleanliness according to an embodiment of the present disclosure.
As shown in fig. 3, a method S200 for determining the cleanliness of a floor includes the following steps (a sketch of the overall pipeline in code follows the list):
S201: a first camera and a second camera respectively acquire an image of the floor before the cleaning robot cleans it and an image of the floor after the cleaning robot cleans it;
S202: extract feature information from the acquired before-cleaning and after-cleaning images;
S203: analyze the image feature information of the before-cleaning and after-cleaning images and obtain floor material information in combination with a deep learning algorithm;
S204: calculate the overlap region based on the acquired images and image features;
S205: judge whether the area of the overlap region is larger than a threshold value; and
S206: if the area of the overlap region is larger than the threshold value, calculate the cleanliness of the overlap region based on the overlap region and the floor material information.
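The following is a minimal sketch, not taken from the patent, of how steps S201 to S206 could be chained; the four helper callables are hypothetical stand-ins for the operations the steps name:

OVERLAP_AREA_THRESHOLD = 10_000  # assumed pixel-area threshold, not from the patent

def determine_cleanliness(img_before, img_after,
                          extract_features, detect_material,
                          compute_overlap, compute_cleanliness):
    feats_before = extract_features(img_before)            # S202
    feats_after = extract_features(img_after)
    material = detect_material(img_before, img_after)      # S203 (deep model)
    area, region = compute_overlap(img_before, img_after,
                                   feats_before, feats_after)  # S204
    if area <= OVERLAP_AREA_THRESHOLD:                     # S205
        return None  # not enough shared floor to compare
    return compute_cleanliness(region, material)           # S206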
Wherein acquiring an image before the cleaning robot cleans the floor and an image after the cleaning robot cleans the floor includes: while the cleaning robot performs the cleaning task in motion,
acquiring an image of the uncleaned floor in front of the cleaning robot by a first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot by a second camera installed at the rear of the cleaning robot.
The image feature extraction is realized with a feature detection algorithm, specifically including, but not limited to, the following feature descriptors: SIFT, SURF, ORB, FAST, HOG, HAAR and LBP.
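As one concrete example, assuming OpenCV is available, ORB features (one of the descriptors listed above) could be extracted as follows; the other descriptors would be used analogously:

import cv2

def extract_orb_features(image_bgr, n_features=500):
    # Convert to gray and detect ORB keypoints plus binary descriptors.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors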
Wherein calculating the overlap region based on the acquired images and image features includes:
requiring that the Euclidean distance between the global poses of the cleaning robot labeled on the before-cleaning and after-cleaning images be smaller than a preset value, where the preset value is determined by the observation areas of the first camera and the second camera.
Specifically, an image pair acquired by the first camera and the second camera is taken to cover an overlap region when the deviation between the global optimal poses labeled on the two images is within this distance, that is, when the poses satisfy the following expression:
f([x_{f,i} y_{f,i} θ_{f,i}]^T, [x_{b,j} y_{b,j} θ_{b,j}]^T) < dist
where the parameters have the following meanings:
[x_{f,i} y_{f,i} θ_{f,i}]^T represents the global optimal pose of the cleaning robot labeled on the first camera image at time frame i;
[x_{b,j} y_{b,j} θ_{b,j}]^T represents the global optimal pose of the cleaning robot labeled on the second camera image at time frame j;
f(·) represents the function computing the Euclidean physical distance between the global optimal poses labeled by the first camera and the second camera; dist represents the size of the region observed by the camera and is a fixed value once the camera is mounted.
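A direct sketch of this overlap test follows. The patent does not spell out whether the heading θ enters the distance, so this sketch, as a stated assumption, uses the planar position only:

import math

def poses_overlap(pose_f, pose_b, dist):
    # pose_f = (x_f_i, y_f_i, theta_f_i): pose labeled on the first camera
    # image at time frame i; pose_b likewise for the second camera at frame j.
    dx = pose_f[0] - pose_b[0]
    dy = pose_f[1] - pose_b[1]
    return math.hypot(dx, dy) < dist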
Wherein calculating the overlap region based on the acquired images and image features further includes:
applying image transformations to the before-cleaning and after-cleaning images, including size scaling and/or rotation and affine transformation.
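A minimal sketch, assuming OpenCV and feature matches computed as in the ORB example above, of estimating and applying such a transform; estimateAffinePartial2D covers the scaling and rotation the text mentions, while estimateAffine2D would give a full affine transform:

import cv2
import numpy as np

def align_after_to_before(img_before, img_after, kp_before, kp_after, matches):
    # matches are assumed to come from matcher.match(desc_before, desc_after),
    # so queryIdx indexes kp_before and trainIdx indexes kp_after.
    src = np.float32([kp_after[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_before[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = img_before.shape[:2]
    return cv2.warpAffine(img_after, M, (w, h))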
Wherein calculating the cleanliness of the overlap region based on the overlap region and the floor material information includes:
adopting a different cleanliness calculation method for each material, and computing at least one of the gray-level change of the overlap region, the pixel variance of the color image, and the variance of the gray image after edge extraction to obtain the cleanliness.
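The three statistics named above could be computed roughly as in the sketch below, assuming the two overlap-region crops are already aligned and equally sized; the Canny thresholds are illustrative assumptions:

import cv2
import numpy as np

def cleanliness_stats(region_before, region_after):
    g_before = cv2.cvtColor(region_before, cv2.COLOR_BGR2GRAY)
    g_after = cv2.cvtColor(region_after, cv2.COLOR_BGR2GRAY)
    # 1. Gray-level change between the before- and after-cleaning crops.
    gray_change = float(np.mean(cv2.absdiff(g_before, g_after)))
    # 2. Pixel variance of the color image after cleaning.
    color_variance = float(np.var(region_after))
    # 3. Variance of the gray image after edge extraction.
    edges = cv2.Canny(g_after, 50, 150)
    edge_variance = float(np.var(edges))
    return gray_change, color_variance, edge_variance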
Wherein calculating the cleanliness of the overlap region based on the overlap region and the floor material information further includes:
searching the overlap region, by template matching, for sub-regions matching the size of the template; and
adopting a different cleanliness calculation method for each material, computing at least one of the gray-level change, the color-image pixel variance and the edge-extracted gray-image variance within the sub-regions to obtain the cleanliness.
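A minimal sketch of the template-matching step, assuming OpenCV; TM_CCOEFF_NORMED is one reasonable matching score, not necessarily the one the patent intends:

import cv2

def best_subregion(overlap_gray, template_gray):
    # Slide the template over the overlap region and crop the sub-region
    # at the best-scoring location; the crop is then scored with the same
    # per-material statistics as above.
    result = cv2.matchTemplate(overlap_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _min_v, _max_v, _min_loc, max_loc = cv2.minMaxLoc(result)
    x, y = max_loc
    th, tw = template_gray.shape[:2]
    return overlap_gray[y:y + th, x:x + tw]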
To avoid redundancy, reference may be made to the first embodiment, that is, the details of the method for cleaning the floor by the cleaning robot, for further details of each step of the method for determining floor cleanliness in this embodiment.
Therefore, the method for cleaning a floor with a cleaning robot and the cleaning robot described herein can effectively improve the intelligence of floor cleaning: they judge and locate areas of low cleanliness after cleaning, allowing the robot to execute a targeted secondary cleaning task according to the set cleaning strategy. This improves the efficiency and effect of floor cleaning and addresses problems such as poor cleaning results or residual dirt caused by an excessively dirty floor or by limits of the robot's cleaning capability.
In the description herein, reference to the terms "one embodiment/implementation," "some embodiments/implementations," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment/implementation or example is included in at least one embodiment/implementation or example of the present application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment/implementation or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/implementations or examples. In addition, those skilled in the art may combine the different embodiments/implementations or examples described in this specification, and the features thereof, provided they do not conflict with one another.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely for clarity of illustration of the disclosure and are not intended to limit the scope of the disclosure. Other variations or modifications may occur to those skilled in the art, based on the foregoing disclosure, and are still within the scope of the present disclosure.

Claims (10)

1. A method of cleaning a floor by a cleaning robot, comprising:
acquiring an image of the floor before the cleaning robot cleans it and an image of the floor after the cleaning robot cleans it;
acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image and floor material information;
when the cleanliness is lower than a threshold value, acquiring geometric information of an area to be cleaned, including its position, shape and area;
converting the geometric information of the area to be cleaned from image coordinates in the image coordinate system into global coordinates in the robot coordinate system;
marking the position of the area to be cleaned on a global map; and
cleaning the area to be cleaned based on the mark and a cleaning strategy.
2. The method of cleaning a floor by a cleaning robot according to claim 1, wherein acquiring the image before the cleaning robot cleans the floor and the image after the cleaning robot cleans the floor comprises: while the cleaning robot performs the cleaning task in motion,
acquiring an image of the uncleaned floor in front of the cleaning robot by a first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot by a second camera installed at the rear of the cleaning robot.
3. The method of cleaning a floor by a cleaning robot according to claim 1, wherein acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image and the floor material information comprises:
extracting image features based on the before-cleaning image and the after-cleaning image;
acquiring the floor material information based on the before-cleaning image, the after-cleaning image, the image features and a deep learning model;
calculating an overlap region based on the before-cleaning and after-cleaning images; and
calculating the cleanliness of the overlap region based on the overlap region and the floor material information.
4. The method of cleaning a floor by a cleaning robot according to claim 3, wherein calculating the cleanliness of the overlap region based on the overlap region and the floor material information comprises:
adopting a different cleanliness calculation method for each material, and computing at least one of the gray-level change of the overlap region, the pixel variance of the color image, and the variance of the gray image after edge extraction to obtain the cleanliness.
5. The method of cleaning a floor by a cleaning robot according to claim 1, wherein cleaning the area to be cleaned based on the mark and the cleaning strategy comprises:
cleaning the area to be cleaned with a preset number of cleaning passes and a preset cleaning force based on the current floor material and the corresponding cleanliness.
6. A cleaning robot, characterized by comprising:
a first camera installed at the front of the cleaning robot and inclined downward, for acquiring an image of the uncleaned floor in front of the cleaning robot;
a second camera installed at the rear of the cleaning robot and inclined downward, for acquiring an image of the cleaned floor behind the cleaning robot;
an image processing device comprising a processor and a memory, the memory of the image processing device storing an executable program and the processor of the image processing device executing the executable program stored in the memory, wherein the executable program comprises a scene recognition model and a scene positioning model and is configured to acquire the uncleaned and cleaned floor images, extract image features, calculate the overlap region, obtain the area to be cleaned through the scene recognition model, and convert the geometric information of the area to be cleaned from image coordinates into coordinates in the global coordinate system of the cleaning robot through the scene positioning model; and
a controller device comprising a processor and a memory, the memory of the controller device storing an executable program and the processor of the controller device executing the executable program stored in the memory, wherein the executable program comprises a positioning mark model for marking the area to be cleaned in the global map, the controller device controlling the cleaning robot to clean based on the mark.
7. The cleaning robot of claim 6, further comprising:
a first camera compensation light source for providing supplementary light when the first camera captures images; and
a second camera compensation light source for providing supplementary light when the second camera captures images.
8. A method for determining the cleanliness of a floor, comprising:
acquiring an image of the floor before the cleaning robot cleans it and an image of the floor after the cleaning robot cleans it; and
acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image and floor material information.
9. The method of determining the cleanliness of a floor according to claim 8, wherein acquiring the image before the cleaning robot cleans the floor and the image after the cleaning robot cleans the floor comprises: while the cleaning robot performs the cleaning task in motion,
acquiring an image of the uncleaned floor in front of the cleaning robot by a first camera installed at the front of the cleaning robot; and
acquiring an image of the cleaned floor behind the cleaning robot by a second camera installed at the rear of the cleaning robot.
10. The method of determining the cleanliness of a floor according to claim 8, wherein acquiring the cleanliness of the floor based on the before-cleaning image, the after-cleaning image and the floor material information comprises:
extracting image features based on the before-cleaning image and the after-cleaning image;
acquiring the floor material information based on the before-cleaning image, the after-cleaning image, the image features and a deep learning model;
calculating an overlap region based on the before-cleaning and after-cleaning images; and
calculating the cleanliness of the overlap region based on the overlap region and the floor material information.
CN202110642604.6A 2021-06-09 2021-06-09 Method for cleaning floor by cleaning robot and cleaning robot Pending CN113331743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642604.6A CN113331743A (en) 2021-06-09 2021-06-09 Method for cleaning floor by cleaning robot and cleaning robot

Publications (1)

Publication Number Publication Date
CN113331743A true CN113331743A (en) 2021-09-03

Family

ID=77475704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642604.6A Pending CN113331743A (en) 2021-06-09 2021-06-09 Method for cleaning floor by cleaning robot and cleaning robot

Country Status (1)

Country Link
CN (1) CN113331743A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107807649A (en) * 2017-11-28 2018-03-16 广东工业大学 A kind of sweeping robot and its cleaning method, device
CN109330501A (en) * 2018-11-30 2019-02-15 深圳乐动机器人有限公司 A kind of method and sweeping robot cleaning ground
US20210007572A1 (en) * 2019-07-11 2021-01-14 Lg Electronics Inc. Mobile robot using artificial intelligence and controlling method thereof
CN111643014A (en) * 2020-06-08 2020-09-11 深圳市杉川机器人有限公司 Intelligent cleaning method and device, intelligent cleaning equipment and storage medium
CN112890683A (en) * 2021-01-13 2021-06-04 美智纵横科技有限责任公司 Cleaning method, device, equipment and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114653678A (en) * 2022-03-21 2022-06-24 江苏省人民医院(南京医科大学第一附属医院) Intelligent mobile directional cleaning system and method
CN114831567A (en) * 2022-03-31 2022-08-02 苏州三六零机器人科技有限公司 Method, device and equipment for intelligently selecting cleaning path and readable storage medium
CN115027908A (en) * 2022-06-30 2022-09-09 河南中烟工业有限责任公司 Automatic cleaning system of belt conveyor
WO2024016839A1 (en) * 2022-07-21 2024-01-25 深圳银星智能集团股份有限公司 Secondary cleaning method and apparatus, and cleaning robot and storage medium
CN115429175A (en) * 2022-09-05 2022-12-06 北京云迹科技股份有限公司 Cleaning robot control method, cleaning robot control device, electronic device, and medium
WO2024051704A1 (en) * 2022-09-07 2024-03-14 云鲸智能(深圳)有限公司 Cleaning robot and control method and apparatus therefor, and system and storage medium

Similar Documents

Publication Publication Date Title
CN113331743A (en) Method for cleaning floor by cleaning robot and cleaning robot
CN107981790B (en) Indoor area dividing method and sweeping robot
EP3951544A1 (en) Robot working area map constructing method and apparatus, robot, and medium
CN109074083B (en) Movement control method, mobile robot, and computer storage medium
Baltzakis et al. Fusion of laser and visual data for robot motion planning and collision avoidance
CN110801180B (en) Operation method and device of cleaning robot
Borrmann et al. A mobile robot based system for fully automated thermal 3D mapping
CN104536445B (en) Mobile navigation method and system
US20160189419A1 (en) Systems and methods for generating data indicative of a three-dimensional representation of a scene
Yang et al. Ransac matching: Simultaneous registration and segmentation
CN106569489A (en) Floor sweeping robot having visual navigation function and navigation method thereof
Maier et al. Vision-based humanoid navigation using self-supervised obstacle detection
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
KR101333496B1 (en) Apparatus and Method for controlling a mobile robot on the basis of past map data
Hochdorfer et al. 6 DoF SLAM using a ToF camera: The challenge of a continuously growing number of landmarks
CN111487980B (en) Control method of intelligent device, storage medium and electronic device
Bormann et al. Autonomous dirt detection for cleaning in office environments
CN112085838A (en) Automatic cleaning equipment control method and device and storage medium
CN114047753B (en) Obstacle recognition and obstacle avoidance method of sweeping robot based on deep vision
Koch et al. Wide-area egomotion estimation from known 3d structure
Bormann et al. Fast and accurate normal estimation by efficient 3d edge detection
CN110515386A (en) A kind of intelligent robot
CN110044358A (en) Method for positioning mobile robot based on live field wire feature
Roychoudhury et al. Plane segmentation in organized point clouds using flood fill
Huang et al. Fast initialization method for monocular slam based on indoor model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination