CN114569007A - Intelligent sweeping method of sweeping robot - Google Patents

Intelligent sweeping method of sweeping robot

Info

Publication number
CN114569007A
Authority
CN
China
Prior art keywords
image
ground
follows
cleaning
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210188705.5A
Other languages
Chinese (zh)
Inventor
李志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202210188705.5A
Publication of CN114569007A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration using histogram techniques
    • G06T7/00: Image analysis
    • G06T7/40: Analysis of texture
    • G06T7/41: Analysis of texture based on statistical description of texture
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: CCTV systems for receiving images from a single remote source
    • H04N7/185: CCTV systems for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of smart home, in particular to an intelligent sweeping method of a sweeping robot. It solves the problems that sweeping robots recognize complex floor environments at a low rate and cannot remove floor garbage efficiently. The method comprises the following steps: step 1, scanning and storing floor images; step 2, starting the cleaning mode and photographing; step 3, preprocessing the image and marking suspicious regions; step 4, cleaning the suspicious regions and counting; step 5, detecting whether the cleaning is finished; step 6, detecting whether the region is floor; and step 7, judging whether the number of cleanings exceeds a threshold: if not, continue clearing; if the count reaches or exceeds the threshold, send warning information and return to step 2. Setting a threshold limits the maximum number of cleanings of the same spot and prevents the sweeping robot from sweeping the same place over and over; sending warning information lets the owner of the sweeping robot know the sweeping situation and conveniently remove garbage that is difficult to handle.

Description

Intelligent sweeping method of sweeping robot
Technical Field
The invention relates to the field of smart home, in particular to an intelligent sweeping method of a sweeping robot.
Background
With the rapid development of science and technology, more and more technology products enter our daily life. Nowadays, more and more families use a sweeping robot to clean up garbage; it reduces people's workload, saves sweeping time, and is very convenient. However, sweeping robots still have shortcomings that cannot be ignored. The accuracy with which a sweeping robot recognizes garbage still needs improvement, and the defect is especially obvious on complex, patterned floors, where the robot often keeps cleaning the same spot over and over. A method that accurately recognizes complex floor environments and cleans the floor intelligently is therefore necessary.
Disclosure of Invention
The invention aims to provide an intelligent sweeping method for a sweeping robot that solves the problems of a low recognition rate in complex floor environments and inefficient removal of floor garbage, achieving accurate recognition of complex floor environments and improving the garbage-sweeping efficiency of the sweeping robot.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for intelligently sweeping by a sweeping robot comprises the following steps:
step 1, scanning and storing floor images: scan the floor, extract image features after a series of preprocessing steps on the captured images, integrate the extracted texture features into a feature vector, and store it in the memory of the sweeping robot;
step 2, starting the cleaning mode and photographing: start the cleaning mode, with the front camera photographing the floor at regular intervals;
step 3, preprocessing the image and marking suspicious regions;
step 4, cleaning a suspicious region and setting the count N to 1;
step 5, photographing the suspicious region that was just cleaned and detecting whether it has been cleared;
step 6, detecting whether the region is floor;
and step 7, judging whether the number of cleanings exceeds the threshold: if not, continue clearing; if the count reaches or exceeds the threshold, send warning information and then return to step 2. The control loop is sketched below.
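As a non-authoritative illustration, the following Python sketch outlines this control loop. The robot, camera, and alert interfaces are hypothetical names, not part of this disclosure, and the helper functions mark_suspicious_region and is_floor (signatures simplified here) are sketched in the embodiment below.

    MAX_SWEEPS = 3  # threshold T: maximum sweeps of the same spot

    def cleaning_loop(robot):
        # Steps 2-7 of the method; 'robot' is an assumed duck-typed interface.
        while robot.in_cleaning_mode():
            image = robot.front_camera.capture()         # step 2
            region = mark_suspicious_region(image)       # step 3
            if region is None:
                continue                                 # nothing suspicious yet
            sweeps = 0
            while True:
                robot.sweep(region)                      # step 4
                sweeps += 1
                after = robot.front_camera.capture()     # step 5
                if mark_suspicious_region(after) is None:
                    break                                # cleared; back to step 2
                if is_floor(after):                      # step 6 (simplified call)
                    break                                # just the floor pattern
                if sweeps >= MAX_SWEEPS:                 # step 7
                    robot.send_alert(after)              # warn the owner
                    break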
Further, the step 1 specifically includes:
S1.1, collecting images: photograph the complex floor with the front camera of the sweeping robot; the floor at this time has already been cleaned;
step S1.2, image preprocessing: graying and histogram equalization;
the graying adopts the weighted average method, expressed as follows:
f(x, y) = 0.299R + 0.587G + 0.114B (1)
in formula (1), R, G, and B are the three components of the RGB color image, and f(x, y) is the grayscale image;
histogram equalization is specifically as follows:
the gray level histogram of an image is a one-dimensional discrete function that can be written as:
h(k) = n_k, k = 0, 1, ..., L-1 (2)
in formula (2), n_k is the number of pixels with gray level k in the image f(x, y), and L is the total number of gray levels;
on the basis of the histogram, the normalized histogram is further defined as the relative frequency of occurrence of each gray level, expressed as follows:
P(k) = n_k / N (3)
in formula (3), N is the total number of pixels of the image f(x, y) and n_k is the number of pixels with gray level k;
the transformation function for histogram equalization of the image is expressed as follows:
s_k = T(k) = Σ_{j=0}^{k} P(j) = Σ_{j=0}^{k} n_j / N, k = 0, 1, ..., L-1 (4)
in formula (4), P(j) is the relative frequency of gray level j, n_j its pixel count, and N the total number of pixels of the image f(x, y);
s1.3, extracting texture features: the method for extracting the roughness, the contrast and the direction degree by adopting a Tamura texture feature extraction method comprises the following specific steps:
the coarseness calculation process is as follows:
first, compute the average intensity of the pixels in a 2^k × 2^k active window centered at each pixel:
A_k(x, y) = (1 / 2^{2k}) Σ_{i=x-2^{k-1}}^{x+2^{k-1}-1} Σ_{j=y-2^{k-1}}^{y+2^{k-1}-1} g(i, j) (5)
in formula (5), k = 0, 1, ..., 5 and g(i, j) is the pixel intensity value at (i, j);
then, for each pixel, compute the average intensity differences between the non-overlapping windows on opposite sides of it in the horizontal and vertical directions:
E_{k,h}(x, y) = |A_k(x + 2^{k-1}, y) - A_k(x - 2^{k-1}, y)| (6)
E_{k,v}(x, y) = |A_k(x, y + 2^{k-1}) - A_k(x, y - 2^{k-1})| (7)
in formulas (6) and (7), the value of k that maximizes E sets the optimum window size:
S_best(x, y) = 2^k (8)
E_k = E_max = max(E_{1,h}, E_{1,v}, ..., E_{5,h}, E_{5,v}) (9)
finally, the coarseness is obtained as the average of S_best over the whole image:
F_crs = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} S_best(i, j) (10)
in formula (10), m × n is the size of the image;
the contrast calculation procedure is as follows:
the contrast is obtained from the statistical distribution of pixel intensities. It is defined through the kurtosis α_4 = μ_4 / σ^4, where μ_4 is the fourth central moment of the image gray levels and σ^2 is their variance; the contrast is then expressed as follows:
F_con = σ / α_4^{1/4} (11)
the directionality is calculated as follows:
first, compute the gradient vector at each pixel; its modulus and direction are respectively defined as:
modulus: |ΔG| = (|Δ_H| + |Δ_V|) / 2 (12)
direction: θ = arctan(Δ_V / Δ_H) + π/2 (13)
in formulas (12) and (13), Δ_H and Δ_V are the horizontal and vertical differences obtained by convolving the target image with the two 3 × 3 templates:
Δ_H: [-1 0 1; -1 0 1; -1 0 1]   Δ_V: [1 1 1; 0 0 0; -1 -1 -1]
once the gradient vectors of all pixels are computed, a histogram H_D is constructed to express the distribution of θ; the local edge-probability histogram over the n direction bins is expressed as follows:
H_D(k) = N_θ(k) / Σ_{i=0}^{n-1} N_θ(i), k = 0, 1, ..., n-1 (14)
in formula (14), N_θ(k) is the number of pixels with |ΔG| ≥ t whose direction θ falls in bin k, i.e. (2k - 1)π / (2n) ≤ θ < (2k + 1)π / (2n);
finally, the overall directionality of the image is obtained from the sharpness of the peaks of the histogram, expressed as follows:
F_dir = Σ_{p}^{n_p} Σ_{φ ∈ w_p} (φ - φ_p)^2 H_D(φ) (15)
in formula (15), p indexes the peaks of H_D, n_p is the number of peaks in H_D, w_p is the set of bins contained in peak p, and φ_p is the bin with the highest value within peak p;
and S1.4, integrating the extracted texture features into a feature vector and storing it in the memory of the sweeping robot.
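A minimal sketch of this integration step, assuming the three Tamura components above (illustrative function names; implementations are sketched in Example 1 below):

    import numpy as np

    def floor_feature_vector(gray):
        # One feature vector per reference photo of the clean floor.
        return np.array([coarseness(gray), contrast(gray), directionality(gray)])

    # The robot's 'memory' can then simply hold the stacked vectors, e.g.:
    # np.save('floor_features.npy',
    #         np.stack([floor_feature_vector(g) for g in reference_shots]))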
Further, in step 2, the cleaning mode is the process in which the sweeping robot removes garbage from the floor. The front camera photographs the floor at regular intervals to collect current floor information, which is used to detect whether the floor has been cleaned and whether a suspicious region is actually floor.
Further, in step 3, the preprocessing includes: graying, histogram equalization, and image segmentation, the graying and histogram equalization being the same as in step S1.2, and the image being segmented with a threshold-based method, expressed as follows:
g(x, y) = 1 (white) if f(x, y) > T; g(x, y) = 0 (black) if f(x, y) ≤ T (16)
in formula (16), f(x, y) is the image gray value and T is a threshold, which can be determined by the bimodal (double-peak) method;
the white areas are marked as suspicious regions;
further, in step 5, detecting whether the region has been cleared means preprocessing the newly captured image to see whether a suspicious region remains: if so, go to step 6; if not, the clearing is finished, work continues, and the method jumps to step 2;
further, in step 6, detecting whether the region is floor includes the following steps:
S6.1, preprocessing the captured image: as in step 1, a series of preprocessing steps such as graying and histogram equalization is applied to the captured image;
S6.2, extracting texture features of the image: the Tamura texture features of the image are extracted as in step 1;
S6.3, calculating the similarity of the images: the Mahalanobis distance is used, expressed as follows:
d(X, Y) = sqrt((X - Y)^T C^{-1} (X - Y)) (17)
in formula (17), X and Y are N-dimensional feature vectors and C^{-1} is the inverse of the covariance matrix C;
if the similarity between the captured image and some stored image exceeds the threshold, the suspicious region is floor: the clearing is considered finished, work continues, and the method jumps to step 2; if the similarity with every stored image is below the threshold, the suspicious region is not floor and the clearing continues.
Further, in step 7, the threshold is the maximum number of times the same spot may be cleaned, and the warning information means that the sweeping robot sends the image of the suspicious region to a remote terminal (a mobile phone or a PC) to notify the owner that the suspicious region is difficult to clean.
The invention has the beneficial effects that:
1. The floor image is scanned, texture features are extracted after a series of preprocessing steps and stored in the sweeping robot, and whether a suspicious region is actually floor is judged by computing the similarity between its image and the stored floor images.
2. Setting a threshold limits the maximum number of cleanings of the same spot, preventing the sweeping robot from sweeping the same place over and over.
3. Sending warning information lets the owner of the sweeping robot know the sweeping situation and conveniently remove garbage that is difficult to handle.
Drawings
Fig. 1 is a flowchart of a method for intelligently sweeping by a sweeping robot according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The embodiment provides a method for intelligently sweeping by a sweeping robot, which comprises the following steps:
step 1, scanning and storing floor images: scan the floor, obtain a floor recognition model through training, and store it in the memory of the sweeping robot;
using the front camera of the sweeping robot, photograph the complex floor (the floor at this time has already been cleaned); after a series of preprocessing steps on the captured images, extract image features, integrate the extracted texture features into a feature vector, and store it in the memory of the sweeping robot, specifically as follows:
step S1, collecting images;
photograph the complex floor with the front camera of the sweeping robot; the floor at this time has already been cleaned;
step S2, preprocessing the image;
the captured image undergoes a series of preprocessing steps, comprising: graying and histogram equalization;
graying adopts the weighted average method, expressed as follows:
f(x, y) = 0.299R + 0.587G + 0.114B (1)
in formula (1), R, G, and B are the three components of the RGB color image, and f(x, y) is the grayscale image;
histogram equalization converts an image into one whose histogram is equalized by a gray-level transform, i.e., with approximately the same number of pixels at each gray level; the specific process is as follows:
the gray level histogram of an image is a one-dimensional discrete function that can be written as:
h(k) = n_k, k = 0, 1, ..., L-1 (2)
in formula (2), n_k is the number of pixels with gray level k in the image f(x, y), and L is the total number of gray levels;
on the basis of the histogram, the normalized histogram is further defined as the relative frequency of occurrence of each gray level, expressed as follows:
P(k) = n_k / N (3)
in formula (3), N is the total number of pixels of the image f(x, y) and n_k is the number of pixels with gray level k;
the transformation function for histogram equalization of the image is expressed as follows:
s_k = T(k) = Σ_{j=0}^{k} P(j) = Σ_{j=0}^{k} n_j / N, k = 0, 1, ..., L-1 (4)
in formula (4), P(j) is the relative frequency of gray level j, n_j its pixel count, and N the total number of pixels of the image f(x, y);
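For reference, a minimal NumPy sketch of formulas (1) through (4); the function names are illustrative and an 8-bit image (L = 256) is assumed:

    import numpy as np

    def to_gray(rgb):
        # Weighted-average graying, formula (1): 0.299 R + 0.587 G + 0.114 B.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

    def equalize_hist(gray, levels=256):
        # h(k), formula (2): pixel count at each gray level.
        n_k = np.bincount(gray.ravel(), minlength=levels)
        # P(k), formula (3): relative frequency of each gray level.
        p_k = n_k / gray.size
        # T(k), formula (4): cumulative sum of P, rescaled to [0, L-1].
        lut = np.round((levels - 1) * np.cumsum(p_k)).astype(np.uint8)
        return lut[gray]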
step S3, extracting texture features;
the texture features are extracted with the Tamura texture feature method; the Tamura texture features comprise 6 components, namely: coarseness, contrast, directionality, line-likeness, regularity, and roughness, of which 3 components are adopted as the texture features of the image, specifically as follows:
coarseness characterizes the granularity of the image texture, and its computation can be divided into the following steps:
first, compute the average intensity of the pixels in a 2^k × 2^k active window centered at each pixel:
A_k(x, y) = (1 / 2^{2k}) Σ_{i=x-2^{k-1}}^{x+2^{k-1}-1} Σ_{j=y-2^{k-1}}^{y+2^{k-1}-1} g(i, j) (5)
in the formula, k = 0, 1, ..., 5 and g(i, j) is the pixel intensity value at (i, j);
then, for each pixel, compute the average intensity differences between the non-overlapping windows on opposite sides of it in the horizontal and vertical directions:
E_{k,h}(x, y) = |A_k(x + 2^{k-1}, y) - A_k(x - 2^{k-1}, y)| (6)
E_{k,v}(x, y) = |A_k(x, y + 2^{k-1}) - A_k(x, y - 2^{k-1})| (7)
in formulas (6) and (7), the value of k that maximizes E sets the optimum window size:
S_best(x, y) = 2^k (8)
E_k = E_max = max(E_{1,h}, E_{1,v}, ..., E_{5,h}, E_{5,v}) (9)
finally, the coarseness is obtained as the average of S_best over the whole image:
F_crs = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} S_best(i, j) (10)
in formula (10), m × n is the size of the image;
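The following Python sketch implements formulas (5) through (10); centering the windows with SciPy's uniform_filter is an implementation assumption of this sketch, not something the method prescribes:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def coarseness(gray, kmax=5):
        # Tamura coarseness for a 2-D grayscale array, formulas (5)-(10).
        g = gray.astype(np.float64)
        h, w = g.shape
        E = np.zeros((kmax, 2, h, w))  # per k: horizontal / vertical differences
        for k in range(1, kmax + 1):
            # A_k, formula (5): mean over a 2^k x 2^k window at each pixel.
            A = uniform_filter(g, size=2 ** k, mode="nearest")
            d = 2 ** (k - 1)
            Eh, Ev = np.zeros_like(g), np.zeros_like(g)
            # Formulas (6)-(7): differences between opposite, non-overlapping windows.
            Eh[d:h - d, :] = np.abs(A[2 * d:, :] - A[:h - 2 * d, :])
            Ev[:, d:w - d] = np.abs(A[:, 2 * d:] - A[:, :w - 2 * d])
            E[k - 1, 0], E[k - 1, 1] = Eh, Ev
        # Formulas (8)-(9): the k maximizing E gives S_best = 2^k at each pixel.
        best_k = E.reshape(2 * kmax, h, w).argmax(axis=0) // 2 + 1
        # Formula (10): coarseness is the mean of S_best over the whole image.
        return float(np.mean(2.0 ** best_k))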
the contrast is obtained from the statistical distribution of pixel intensities. It is defined through the kurtosis α_4 = μ_4 / σ^4, where μ_4 is the fourth central moment of the image gray levels and σ^2 is their variance; the contrast is then expressed as follows:
F_con = σ / α_4^{1/4} (11)
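A minimal sketch of formula (11), assuming a grayscale array input:

    import numpy as np

    def contrast(gray):
        # Tamura contrast, formula (11): sigma / (mu4 / sigma^4)^(1/4).
        g = gray.astype(np.float64)
        sigma2 = g.var()                      # variance of the gray levels
        if sigma2 == 0:
            return 0.0                        # a flat image has no contrast
        mu4 = np.mean((g - g.mean()) ** 4)    # fourth central moment
        alpha4 = mu4 / sigma2 ** 2            # kurtosis, mu4 / sigma^4
        return float(np.sqrt(sigma2) / alpha4 ** 0.25)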
the directionality is calculated as follows:
first, compute the gradient vector at each pixel; its modulus and direction are respectively defined as:
modulus: |ΔG| = (|Δ_H| + |Δ_V|) / 2 (12)
direction: θ = arctan(Δ_V / Δ_H) + π/2 (13)
in formulas (12) and (13), Δ_H and Δ_V are the horizontal and vertical differences obtained by convolving the target image with the two 3 × 3 templates:
Δ_H: [-1 0 1; -1 0 1; -1 0 1]   Δ_V: [1 1 1; 0 0 0; -1 -1 -1]
once the gradient vectors of all pixels are computed, a histogram H_D is constructed to express the distribution of θ; the local edge-probability histogram over the n direction bins is expressed as follows:
H_D(k) = N_θ(k) / Σ_{i=0}^{n-1} N_θ(i), k = 0, 1, ..., n-1 (14)
in formula (14), N_θ(k) is the number of pixels with |ΔG| ≥ t whose direction θ falls in bin k, i.e. (2k - 1)π / (2n) ≤ θ < (2k + 1)π / (2n);
finally, the overall directionality of the image is obtained from the sharpness of the peaks of the histogram, expressed as follows:
F_dir = Σ_{p}^{n_p} Σ_{φ ∈ w_p} (φ - φ_p)^2 H_D(φ) (15)
in formula (15), p indexes the peaks of H_D, n_p is the number of peaks in H_D, w_p is the set of bins contained in peak p, and φ_p is the bin with the highest value within peak p;
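A sketch of formulas (12) through (15). For simplicity it keeps only the single dominant peak of H_D (a full implementation would sum over every peak p), folds directions into [0, pi) rather than using the exact labeling of formula (13), and the bin count n and threshold t are illustrative defaults:

    import numpy as np
    from scipy.ndimage import convolve

    def directionality(gray, n=16, t=12.0):
        g = gray.astype(np.float64)
        # The two 3x3 templates for the horizontal / vertical differences.
        kh = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
        kv = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float)
        dh, dv = convolve(g, kh), convolve(g, kv)
        mag = (np.abs(dh) + np.abs(dv)) / 2.0      # |dG|, formula (12)
        theta = np.arctan2(dv, dh) % np.pi         # direction folded to [0, pi)
        # H_D, formula (14): only pixels with |dG| >= t vote in the histogram.
        hist, _ = np.histogram(theta[mag >= t], bins=n, range=(0.0, np.pi))
        hd = hist / max(hist.sum(), 1)
        # Formula (15), single-peak simplification: spread of H_D around phi_p.
        phi = (np.arange(n) + 0.5) * np.pi / n     # bin centers
        p = int(hd.argmax())                       # bin with the highest value
        return float(np.sum(((phi - phi[p]) ** 2) * hd))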
step S4, integrating the extracted texture features into feature vectors, and storing the feature vectors in a memory of the sweeping robot;
step 2, starting a cleaning mode and shooting: starting a cleaning mode, and regularly shooting the ground by a front camera;
the cleaning mode refers to the process that the floor sweeping robot cleans the ground garbage, and the front-facing camera regularly shoots the ground so as to collect the information of the ground at the moment and be used for detecting whether the ground is cleaned up or not and preparing for the work such as the ground and the like.
Step 3, preprocessing an image;
the image preprocessing is to identify and detect images in the follow-up process, and if the images are not preprocessed and are affected by sunlight, light and the like, the images can be greatly changed, so that the follow-up work is not facilitated.
The image preprocessing method comprises the following steps: graying, histogram equalization, and image segmentation
The graying is to convert a shot color image into a grayscale image, and the graying method comprises the following steps: the invention discloses an average value method, a gray scale method, a maximum value method, a weighted average method and the like, wherein the weighted average method is adopted to graye an image and is expressed as follows:
f(x, y) = 0.299R + 0.587G + 0.114B (16)
in formula (16), R, G, and B are the three components of the RGB color image, and f(x, y) is the grayscale image;
histogram equalization is a method for adjusting contrast using an image histogram in the field of image processing. This method is often used to increase the global contrast of many images, especially when the contrast of the useful data of the images is relatively close. In this way, the luminance can be better distributed over the histogram. This can be used to enhance local contrast without affecting overall contrast, and histogram equalization accomplishes this by effectively extending the commonly used luminance.
Conventional image segmentation methods mainly fall into threshold-based, region-based, edge-based, and theory-specific methods. The invention segments the image with a threshold-based method, expressed as follows:
g(x, y) = 1 (white) if f(x, y) > T; g(x, y) = 0 (black) if f(x, y) ≤ T (17)
in formula (17), f(x, y) is the image gray value and T is a threshold, which can be determined by the bimodal (double-peak) method;
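A sketch of the double-peak threshold selection and the segmentation of formula (17); the iterative histogram smoothing below is one common realization of the bimodal method, assumed here rather than prescribed by the invention:

    import numpy as np

    def bimodal_threshold(gray, max_iter=1000):
        # Smooth the 256-bin histogram until exactly two local maxima remain,
        # then place T at the valley between the two peaks.
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        for _ in range(max_iter):
            peaks = [i for i in range(1, 255)
                     if hist[i - 1] < hist[i] >= hist[i + 1]]
            if len(peaks) == 2:
                lo, hi = peaks
                return lo + int(np.argmin(hist[lo:hi + 1]))
            hist = np.convolve(hist, np.ones(3) / 3, mode="same")
        return 128  # fallback if the histogram never becomes bimodal

    def segment(gray):
        # Formula (17): pixels above T become white (suspicious candidates).
        return np.where(gray > bimodal_threshold(gray), 255, 0).astype(np.uint8)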
step 4, marking out suspicious regions;
after the image preprocessing, some regions stand out as obviously different; these are marked as suspicious regions;
step 5, cleaning the suspicious region;
clean the suspicious region marked in step 4 and set the count N = 1, indicating that the suspicious region has been cleaned once;
step 6, photographing the suspicious region which is just cleaned, and detecting whether the suspicious region is cleaned;
the detection means preprocessing the captured image to see whether a suspicious region remains: if so, go to step 7; if not, the clearing is finished, work continues, and the method jumps to step 2;
step 7, detecting whether the region is floor;
detecting whether the region is floor comprises the following steps:
s7.1, preprocessing the shot image;
as in step 1, a series of preprocessing steps such as graying and histogram equalization is applied to the captured image;
s7.2, extracting texture features of the image;
the Tamura texture features of the image are extracted as in step 1;
s7.3, calculating the similarity of the images;
compute the similarity between the captured image and each floor image in the memory of the sweeping robot. If the similarity with some image exceeds the threshold, the suspicious region is floor: the clearing is finished, work continues, and the method jumps to step 2. If the similarity with every image is below the threshold, the suspicious region is not floor and the clearing continues;
computing image similarity means measuring the distance between the feature vectors of the image under test and the target image. Common distance measures include the Manhattan distance, the Euclidean distance, the Minkowski distance, the Mahalanobis distance, and the cosine distance; the invention adopts the Mahalanobis distance, expressed as follows:
d(X, Y) = sqrt((X - Y)^T C^{-1} (X - Y)) (18)
in formula (18), X and Y are N-dimensional feature vectors and C^{-1} is the inverse of the covariance matrix C;
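A minimal sketch of formula (18) and of the decision in step S7.3. Estimating C from the stored clean-floor vectors and the numeric distance threshold are assumptions of this sketch; note that with a distance measure, smaller means more similar:

    import numpy as np

    def mahalanobis(x, y, cov_inv):
        # Formula (18): d(X, Y) = sqrt((X - Y)^T C^-1 (X - Y)).
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(np.sqrt(d @ cov_inv @ d))

    def is_floor(features, stored_vectors, dist_threshold=2.0):
        # C is estimated from the stored clean-floor vectors (rows = samples);
        # pinv keeps the estimate usable when only a few reference shots exist.
        cov_inv = np.linalg.pinv(np.cov(stored_vectors, rowvar=False))
        return any(mahalanobis(features, v, cov_inv) <= dist_threshold
                   for v in stored_vectors)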
and 8, judging whether the clearing frequency exceeds a threshold value, if not, continuing clearing, and if the clearing frequency exceeds the threshold value or is equal to the threshold value, sending warning information, and then performing the step 2.
The threshold limits how often the sweeping robot repeatedly sweeps the same place; in this embodiment the threshold T is 3, meaning the robot sweeps the same place at most 3 times.
The warning information means that the sweeping robot sends the captured image of the suspicious region to a remote terminal (a mobile phone or a PC) to notify the owner that the suspicious region is difficult to clean.
Thus, the flow of the whole method is completed.
In summary, the invention scans the floor to obtain floor images, extracts texture features after a series of preprocessing steps and stores them in the sweeping robot, and judges whether a region is floor by computing the similarity between the image of the suspicious region and the stored floor images; setting a threshold limits the maximum number of cleanings of the same spot, preventing the robot from sweeping the same place over and over; and sending warning information lets the owner of the sweeping robot know the sweeping situation and conveniently remove garbage that is difficult to handle.
Details not described herein belong to the common knowledge of those skilled in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. The intelligent sweeping method of the sweeping robot is characterized by comprising the following steps of:
step 1, scanning and storing ground images: scanning a ground image, extracting image characteristics after a series of preprocessing is carried out on the shot image, integrating the extracted texture characteristics into a characteristic vector and storing the characteristic vector in a memory of the sweeping robot;
step 2, starting a cleaning mode and shooting: starting a cleaning mode, and regularly shooting the ground by a front camera;
step 3, preprocessing the image and marking suspicious regions;
step 4, cleaning a suspicious region and counting;
step 5, photographing the suspicious region that was just cleaned and detecting whether it has been cleared;
step 6, detecting whether the region is floor;
and step 7, judging whether the number of cleanings exceeds a threshold: if not, continuing clearing; if the count reaches or exceeds the threshold, sending warning information and then returning to step 2.
2. The method for intelligent sweeping by the sweeping robot according to claim 1, wherein the step 1 specifically comprises:
s1.1, collecting an image: shooting a complex ground by using a front camera of the sweeping robot, wherein the ground is cleaned;
step S1.2, image preprocessing: graying and histogram equalization;
the graying adopts the weighted average method, expressed as follows:
f(x, y) = 0.299R + 0.587G + 0.114B (1)
in formula (1), R, G, and B are the three components of the RGB color image, and f(x, y) is the grayscale image;
histogram equalization is specifically as follows:
the gray level histogram of an image is a one-dimensional discrete function that can be written as:
h(k) = n_k, k = 0, 1, ..., L-1 (2)
in formula (2), n_k is the number of pixels with gray level k in the image f(x, y), and L is the total number of gray levels;
on the basis of the histogram, the normalized histogram is further defined as the relative frequency of occurrence of each gray level, expressed as follows:
P(k) = n_k / N (3)
in formula (3), N is the total number of pixels of the image f(x, y) and n_k is the number of pixels with gray level k;
the transformation function for histogram equalization of the image is expressed as follows:
s_k = T(k) = Σ_{j=0}^{k} P(j) = Σ_{j=0}^{k} n_j / N, k = 0, 1, ..., L-1 (4)
in formula (4), P(j) is the relative frequency of gray level j, n_j its pixel count, and N the total number of pixels of the image f(x, y);
s1.3, extracting texture features: the method for extracting the roughness, the contrast and the direction degree by adopting a Tamura texture feature extraction method comprises the following specific steps:
the coarseness calculation process is as follows:
first, compute the average intensity of the pixels in a 2^k × 2^k active window centered at each pixel:
A_k(x, y) = (1 / 2^{2k}) Σ_{i=x-2^{k-1}}^{x+2^{k-1}-1} Σ_{j=y-2^{k-1}}^{y+2^{k-1}-1} g(i, j) (5)
in formula (5), k = 0, 1, ..., 5 and g(i, j) is the pixel intensity value at (i, j);
then, for each pixel, compute the average intensity differences between the non-overlapping windows on opposite sides of it in the horizontal and vertical directions:
E_{k,h}(x, y) = |A_k(x + 2^{k-1}, y) - A_k(x - 2^{k-1}, y)| (6)
E_{k,v}(x, y) = |A_k(x, y + 2^{k-1}) - A_k(x, y - 2^{k-1})| (7)
in formulas (6) and (7), the value of k that maximizes E sets the optimum window size:
S_best(x, y) = 2^k (8)
E_k = E_max = max(E_{1,h}, E_{1,v}, ..., E_{5,h}, E_{5,v}) (9)
finally, the coarseness is obtained as the average of S_best over the whole image:
F_crs = (1 / (m × n)) Σ_{i=1}^{m} Σ_{j=1}^{n} S_best(i, j) (10)
in formula (10), m × n is the size of the image;
the contrast calculation procedure is as follows:
the contrast is obtained from the statistical distribution of pixel intensities. It is defined through the kurtosis α_4 = μ_4 / σ^4, where μ_4 is the fourth central moment of the image gray levels and σ^2 is their variance; the contrast is then expressed as follows:
F_con = σ / α_4^{1/4} (11)
the directionality is calculated as follows:
first, compute the gradient vector at each pixel; its modulus and direction are respectively defined as:
modulus: |ΔG| = (|Δ_H| + |Δ_V|) / 2 (12)
direction: θ = arctan(Δ_V / Δ_H) + π/2 (13)
in formulas (12) and (13), Δ_H and Δ_V are the horizontal and vertical differences obtained by convolving the target image with the two 3 × 3 templates:
Δ_H: [-1 0 1; -1 0 1; -1 0 1]   Δ_V: [1 1 1; 0 0 0; -1 -1 -1]
once the gradient vectors of all pixels are computed, a histogram H_D is constructed to express the distribution of θ; the local edge-probability histogram over the n direction bins is expressed as follows:
H_D(k) = N_θ(k) / Σ_{i=0}^{n-1} N_θ(i), k = 0, 1, ..., n-1 (14)
in formula (14), N_θ(k) is the number of pixels with |ΔG| ≥ t whose direction θ falls in bin k, i.e. (2k - 1)π / (2n) ≤ θ < (2k + 1)π / (2n). The histogram first discretizes the range of directions and then, within each bin, counts the pixels whose |ΔG| exceeds the given threshold; the histogram exhibits clear peaks for images with a pronounced direction and is relatively flat for images without one.
finally, the overall directionality of the image is obtained from the sharpness of the peaks of the histogram, expressed as follows:
F_dir = Σ_{p}^{n_p} Σ_{φ ∈ w_p} (φ - φ_p)^2 H_D(φ) (15)
in formula (15), p indexes the peaks of H_D, n_p is the number of peaks in H_D, w_p is the set of bins contained in peak p, and φ_p is the bin with the highest value within peak p;
and S1.4, integrating the extracted texture features into feature vectors, and storing the feature vectors in a memory of the sweeping robot.
3. The method as claimed in claim 2, wherein in step 2 the cleaning mode is the process in which the sweeping robot removes garbage from the floor, and the front camera photographs the floor at regular intervals to collect current floor information, which is used to detect whether the floor has been cleaned and whether a region is actually floor.
4. The method for intelligent sweeping of the sweeping robot according to claim 3, wherein in step 3 the preprocessing comprises: graying, histogram equalization, and image segmentation, the graying and histogram equalization being the same as in step S1.2, and the image being segmented with a threshold-based method, expressed as follows:
g(x, y) = 1 (white) if f(x, y) > T; g(x, y) = 0 (black) if f(x, y) ≤ T (16)
in formula (16), f(x, y) is the image gray value and T is a threshold, which can be determined by the bimodal (double-peak) method;
and the white areas are marked as suspicious regions.
5. The method for intelligent sweeping of the sweeping robot according to claim 4, wherein in step 5, detecting whether the region has been cleared means preprocessing the captured image to see whether a suspicious region remains: if so, go to step 6; if not, the clearing is finished, work continues, and the method jumps to step 2.
6. The method for intelligent sweeping of the sweeping robot according to claim 5, wherein step 6, detecting whether the region is floor, comprises the following steps:
s6.1, preprocessing the shot image;
as in step 1, a series of preprocessing steps such as graying and histogram equalization is applied to the captured image;
s6.2, extracting texture features of the image;
the Tamura texture features of the image are extracted as in step 1;
s6.3, calculating the similarity of the images;
the Mahalanobis distance is used, expressed as follows:
d(X, Y) = sqrt((X - Y)^T C^{-1} (X - Y)) (17)
in formula (17), X and Y are N-dimensional feature vectors and C^{-1} is the inverse of the covariance matrix C;
if the similarity between the captured image and some stored image exceeds the threshold, the suspicious region is floor: the clearing is considered finished, work continues, and the method jumps to step 2; if the similarity with every stored image is below the threshold, the suspicious region is not floor and the clearing continues.
7. The method for intelligent cleaning by the cleaning robot according to claim 6, wherein in step 7 the threshold is the maximum number of times the same spot may be cleaned, and the warning information means that the cleaning robot sends the captured image of the suspicious region to a remote terminal (a mobile phone or a PC) to notify the owner that the suspicious region is difficult to clean.
CN202210188705.5A 2022-02-28 2022-02-28 Intelligent sweeping method of sweeping robot Pending CN114569007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210188705.5A CN114569007A (en) 2022-02-28 2022-02-28 Intelligent sweeping method of sweeping robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210188705.5A CN114569007A (en) 2022-02-28 2022-02-28 Intelligent sweeping method of sweeping robot

Publications (1)

Publication Number Publication Date
CN114569007A 2022-06-03

Family

ID=81771915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210188705.5A Pending CN114569007A (en) 2022-02-28 2022-02-28 Intelligent sweeping method of sweeping robot

Country Status (1)

Country Link
CN (1) CN114569007A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256421A (en) * 2017-12-05 2018-07-06 盈盛资讯科技有限公司 A kind of dynamic gesture sequence real-time identification method, system and device
CN109003262A (en) * 2018-06-29 2018-12-14 炬大科技有限公司 Stain clean method and device
CN111079596A (en) * 2019-12-05 2020-04-28 国家海洋环境监测中心 System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN111493753A (en) * 2020-04-25 2020-08-07 王晨庄 Floor sweeping robot and method capable of cleaning floor based on floor cleanliness degree
CN111733743A (en) * 2020-06-17 2020-10-02 广州赛特智能科技有限公司 Automatic cleaning method and cleaning system

Similar Documents

Publication Publication Date Title
EP1374168B1 (en) Method and apparatus for determining regions of interest in images and for image transmission
CN108416355B (en) Industrial field production data acquisition method based on machine vision
CN107194317B (en) Violent behavior detection method based on grid clustering analysis
Valizadeh et al. Binarization of degraded document image based on feature space partitioning and classification
CN106373146B (en) A kind of method for tracking target based on fuzzy learning
KR101906796B1 (en) Device and method for image analyzing based on deep learning
CN106682665B (en) Seven-segment type digital display instrument number identification method based on computer vision
US7058220B2 (en) Method and system for processing images using histograms
EP1387316A2 (en) Calculating noise form multiple digital images having a common noise source
CN114022823A (en) Shielding-driven pedestrian re-identification method and system and storable medium
CN111652033A (en) Lane line detection method based on OpenCV
CN111046782B (en) Quick fruit identification method for apple picking robot
CN117058232A (en) Position detection method for fish target individuals in cultured fish shoal by improving YOLOv8 model
CN117036352B (en) Video analysis method and system based on artificial intelligence
CN110647813A (en) Human face real-time detection and identification method based on unmanned aerial vehicle aerial photography
JP2009123234A (en) Object identification method, apparatus and program
JP4285640B2 (en) Object identification method, apparatus and program
CN114569007A (en) Intelligent sweeping method of sweeping robot
CN107194385A (en) A kind of intelligent vehicle license plate recognition system
CN112532938B (en) Video monitoring system based on big data technology
CN112395990B (en) Method, device, equipment and storage medium for detecting weak and small targets of multi-frame infrared images
CN114549649A (en) Feature matching-based rapid identification method for scanned map point symbols
CN114255203B (en) Fry quantity estimation method and system
CN114913438A (en) Yolov5 garden abnormal target identification method based on anchor frame optimal clustering
CN109034125B (en) Pedestrian detection method and system based on scene complexity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220603)