CN111374608A - Dirt detection method, device, equipment and medium for lens of sweeping robot - Google Patents


Publication number
CN111374608A
CN111374608A
Authority
CN
China
Prior art keywords
dirt
image
lens
types
training
Prior art date
Legal status
Granted
Application number
CN201811637552.8A
Other languages
Chinese (zh)
Other versions
CN111374608B (en)
Inventor
王旭宁
Current Assignee
Sharkninja China Technology Co Ltd
Original Assignee
Sharkninja China Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sharkninja China Technology Co Ltd
Priority to CN201811637552.8A
Publication of CN111374608A
Application granted
Publication of CN111374608B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4011: Regulation of the cleaning machine by electric means; control systems and remote control systems therefor

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a dirt detection method, device, equipment and medium for the lens of a sweeping robot. The method comprises at least the following steps: acquiring one or more images captured by the lens of the sweeping robot; performing target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt; and providing corresponding dirt prompt information to the user according to the target detection result. A sweeping robot that navigates by visual analysis can thus effectively and automatically detect dirt on its lens and promptly ask the user to clean it, which helps the robot navigate more accurately.

Description

Dirt detection method, device, equipment and medium for lens of sweeping robot
Technical Field
The application relates to the field of intelligent household appliances, and in particular to a dirt detection method, device, equipment and medium for the lens of a sweeping robot.
Background
The sweeping robot is an intelligent household appliance that has appeared and become popular in recent years.
The sweeping robot can navigate automatically based on radar detection or visual analysis, freeing people from cleaning the floor themselves. A sweeping robot that navigates by visual analysis carries at least one camera on its body; while the robot works, the camera captures images of the surrounding environment, and the robot analyzes these images and navigates according to the analysis result.
However, the sweeping robot stays close to the floor for long periods, so its camera lens is prone to pick up dirt such as hair, dust and oil stains, which contaminates the lens.
Disclosure of Invention
The embodiments of the present application provide a dirt detection method, device, equipment and medium for the lens of a sweeping robot, aiming to solve the following technical problem in the prior art: because the sweeping robot cannot perceive dirt on its lens, the accuracy of navigation based on visual analysis is adversely affected.
The embodiment of the application adopts the following technical scheme:
a dirt detection method for a lens of a sweeping robot comprises the following steps:
acquiring one or more images acquired by a lens of the sweeping robot;
performing target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt;
and providing corresponding dirt prompt information for a user according to the target detection result.
Optionally, before performing the target detection in the image, the method further includes:
segmenting the foreground and the background of the image by using an image segmentation algorithm;
the target detection in the image specifically includes:
and carrying out target detection in the foreground of the image obtained by segmentation.
Optionally, before performing target detection in the image by using a bounding box regression model based on a convolutional neural network, the method further includes:
acquiring a plurality of training image samples for each of a plurality of types of dirt, and setting differentiated training weights for at least two of the plurality of types;
and training the frame regression model according to the differentiated training weights and the plurality of training image samples.
Optionally, the setting of differentiated training weights for at least two of the multiple types specifically includes:
setting differentiated training weights for the at least two types according to the difference in how conspicuous the dirt is in the training image samples corresponding to each of the at least two types, so that training is preferentially biased toward the types whose dirt is relatively inconspicuous.
Optionally, the at least two types include a significant dirt type and an insignificant dirt type, and the training weight set for the insignificant dirt type is greater than that set for the significant dirt type.
Optionally, after providing the corresponding contamination prompting information to the user, the method further comprises:
determining whether the user has cleaned the detected dirt within a set time; if not, correcting the image region where the dirt sits in images subsequently captured by the lens of the sweeping robot, and then using those images for the robot's navigation.
Optionally, the dirt comprises at least one of: dust, hair, oil stain.
At least one of the technical schemes adopted in the embodiments of the present application can achieve the following beneficial effect: a sweeping robot that navigates by visual analysis can effectively and automatically detect dirt on its lens and promptly ask the user to clean it, which helps the robot navigate more accurately.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of a method for detecting contamination on a lens of a sweeping robot according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of an identified frame and the corresponding correctly labeled frame during training of the frame regression model of FIG. 1, according to some embodiments of the present application;
fig. 3 is a schematic flowchart of an implementation of the method for detecting contaminants on a lens of a sweeping robot in fig. 1 in an actual application scenario according to some embodiments of the present disclosure;
fig. 4 is a schematic structural diagram of a dirt detection device for a lens of a sweeping robot according to some embodiments of the present disclosure, which corresponds to fig. 1;
fig. 5 is a schematic structural diagram of a dirt detection apparatus for a lens of a sweeping robot according to some embodiments of the present disclosure and corresponding to fig. 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The scheme of the present application automatically detects dirt on the lens of a sweeping robot based on a frame regression machine learning algorithm. It can distinguish dirt types and apply deliberately biased algorithmic processing to different types of dirt, so as to obtain a more accurate detection result and thereby help guide the sweeping robot's navigation more accurately. The scheme of the present application is explained in detail below.
Fig. 1 is a flowchart illustrating a method for detecting dirt on the lens of a sweeping robot, according to some embodiments of the present application. In this flow, from the device perspective, the execution subject may be the sweeping robot itself, or another computing device that has a communication connection with the sweeping robot, such as a smartphone, a single machine learning server, or a machine learning server cluster. From the program perspective, the execution subject may accordingly be a program loaded on one of these computing devices, such as a neural network modeling platform based on convolutional neural networks, or an image processing platform.
The process in fig. 1 may include the following steps:
s102: acquiring one or more images acquired by the lens of the sweeping robot.
In some embodiments of the present application, the sweeping robot may have one or more lenses; their position and layout are not specifically limited here. For example, a lens may be located at the center of the robot's top, or around its sides.
In some embodiments of the present application, the images in step S102 may be acquired in real time while the sweeping robot navigates in motion, or may be acquired specifically while it is stationary. If dirt is attached to the lens, its presence is reflected to some extent in the captured images. The images may be shot with an adaptive focal length; if shot in macro mode, dirt attached to the lens shows up more clearly in the captured images.
S104: performing target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt.
In some embodiments of the present application, the dirt that may adhere to the lens can be classified in advance for targeted detection. Such dirt includes, for example: dust, hair (human or pet), fingerprints, oil stains, water drops, clothing fibers, paper scraps, and so on. The dirt may be typed in various ways: each kind of dirt listed above may form its own type, or several kinds may be merged into a larger type according to what they have in common. In the latter case, for example, dirt with similar shapes may be put in the same type, yielding a stripe type (hair, clothing fibers, etc.) and a block type (fingerprints, oil stains, water drops, etc.); or dirt with similar influence on the lens may be put in the same type, yielding a significant type (hair, paper scraps, etc.) and an insignificant type (dust, water drops, etc.).
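One possible encoding of the dirt taxonomy described above, sketched as plain Python data. The groupings simply repeat the examples given in the text; the dictionary names and the helper function are illustrative assumptions, not part of the patent.

```python
# Coarse dirt types grouped by shape, as in the text.
BY_SHAPE = {
    "stripe": ["hair", "clothing fiber"],
    "block": ["fingerprint", "oil stain", "water drop"],
}

# Coarse dirt types grouped by how strongly they affect the lens.
BY_SIGNIFICANCE = {
    "significant": ["hair", "paper scrap"],
    "insignificant": ["dust", "water drop"],
}

def type_of(dirt, taxonomy):
    """Return the coarse type a piece of dirt belongs to, or None."""
    for dirt_type, members in taxonomy.items():
        if dirt in members:
            return dirt_type
    return None
```

Note that the same dirt (e.g. a water drop) can land in different coarse types depending on which grouping criterion is chosen.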
In some embodiments of the present application, since frame regression is used to detect the dirt, a suitable frame needs to be defined for each type of dirt, and the frame is used to enclose the corresponding dirt. If a rectangular frame is used, it can be represented by a four-dimensional vector (x, y, w, h), where x is the abscissa of the frame's center point, y is the ordinate of the center point, w is the frame width, and h is the frame height. Of course, such vector representations are not unique: dimensions may be added, or the definition of existing dimensions may be changed. For example, if a rotatable rectangular frame is used, a dimension can be added to represent the rotation angle of the frame around its center; for another example, if an elliptical frame is used, x and y may be replaced by the coordinates of the two foci of the ellipse, and w and h by the lengths of its major and minor axes.
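The (x, y, w, h) representation above can be sketched as follows, together with the standard intersection-over-union (IoU) measure commonly used to compare two frames. The patent does not specify IoU; it is included here only as the usual way to quantify frame overlap, and the function names are illustrative.

```python
def to_corners(box):
    """Convert a (center-x, center-y, width, height) frame to (x1, y1, x2, y2)."""
    x, y, w, h = box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) frames, in [0, 1]."""
    ax1, ay1, ax2, ay2 = to_corners(box_a)
    bx1, by1, bx2, by2 = to_corners(box_b)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

An identical pair of frames yields an IoU of 1, disjoint frames yield 0, and partially overlapping frames fall in between.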
In some embodiments of the present application, the constructed frame regression model must undergo supervised training on a large number of image samples before use, and to enable supervised training, the frames of the dirt need to be correctly labeled in those samples. During labeling, the frame to be labeled may be defined further; for example, the labeling of w and h may be tailored to the specific dirt. For a hair, the frame may be required to be sufficiently narrow (e.g. within 5 times the hair's diameter); if the hair is relatively curly, the frame may be required to be long enough to enclose the majority of the curled portion (e.g. at least 60% of the hair's full length); and so on.
In some embodiments of the present application, considering that convolutional neural networks are good at fine-grained, region-level processing of images, a frame regression model based on a convolutional neural network may be employed. Such a model has a multilayer structure; each layer contains a number of neurons, and each neuron performs a convolution operation with a preset convolution kernel, extracting high-dimensional features from the input image and making a decision, so as to judge whether the input image contains dirt and, if so, its specific position (that is, to identify the frame of the dirt). During training, the model's parameters are corrected according to the error between the identified frame and the corresponding correctly labeled frame; iterating until the model converges yields a good recognition effect (at least most correctly labeled frames can be accurately identified). The trained frame regression model can then be used to perform the target detection on the image in step S104.
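A minimal sketch of the error term described above, which drives the parameter corrections during training. The patent does not name a specific loss function; smooth L1 (a Huber-style loss) is shown here only because it is a common choice for frame regression, summed over the four frame dimensions.

```python
def smooth_l1(pred_box, true_box, beta=1.0):
    """Smooth-L1 error between a predicted (x, y, w, h) frame and the
    correctly labeled frame: quadratic for small deviations, linear for
    large ones, summed over the four dimensions."""
    total = 0.0
    for p, t in zip(pred_box, true_box):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total
```

The error is zero when the identified frame matches the labeled frame exactly, and grows as the two diverge, which is what the iterative correction step minimizes.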
More intuitively, some embodiments of the present application provide a schematic diagram of the identified bounding box and the corresponding correctly labeled bounding box in the training process of the bounding box regression model of fig. 1, as shown in fig. 2.
In FIG. 2, the outermost solid box represents the border of the image, the inner solid boxes represent the frames currently identified during training, and the dashed boxes represent the correctly labeled frames. The dirt present in the image includes a drop of oil stain in the upper left corner and a hair in the lower right corner. Clearly, there is still an error between each currently identified frame and the corresponding correctly labeled frame; training of the frame regression model can continue so that this error shrinks further until the expected effect is achieved.
S106: and providing corresponding dirt prompt information for a user according to the target detection result.
In some embodiments of the present application, the target detection result reflects whether there is dirt on the lens, as well as further information about the dirt, such as its type and amount. Corresponding dirt prompt information can be generated accordingly.
The corresponding dirt prompt information can be provided to the user in various ways. For example, the sweeping robot may directly play audio, display text and image information, or send the information to the user's mobile terminal. Responding to the dirt prompt, the user can clean off the dirt so that the sweeping robot works better.
By the method of FIG. 1, a sweeping robot that navigates based on visual analysis can effectively and automatically detect dirt on its lens and promptly ask the user to clean it, thereby facilitating more accurate navigation.
Based on the method of fig. 1, some embodiments of the present application also provide some specific embodiments of the method, and further embodiments, which are explained below.
In some embodiments of the present application, an image captured by the lens of the sweeping robot contains at least two layers of content with different depths of field: the content on the lens itself, and the content of the surrounding environment within the viewing area. Since the scheme aims to detect dirt on the lens, the content on the lens is of primary interest. When capturing the image, macro shooting can be used so that the content on the lens is clearer; this makes it convenient to treat the content on the lens as the foreground, and the surrounding environment falling within the viewing area as the background.
To prevent background interference and better perform target detection, an image segmentation algorithm can be used to separate the foreground of the image from its background; target detection is then performed in the foreground to identify the frames of the dirt.
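A toy illustration of the foreground/background split described above, under the macro-shot assumption that lens dirt is in focus (locally high contrast) while the room behind is blurred (locally flat). The local-variance measure and the threshold value are illustrative assumptions; the patent does not prescribe a particular segmentation algorithm.

```python
def local_variance(img, i, j, r=1):
    """Variance of the (2r+1)x(2r+1) neighbourhood around pixel (i, j),
    clipped at the image border. img is a 2D list of grayscale values."""
    vals = [img[a][b]
            for a in range(max(0, i - r), min(len(img), i + r + 1))
            for b in range(max(0, j - r), min(len(img[0]), j + r + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def foreground_mask(img, threshold=10.0):
    """1 where the pixel looks in focus (candidate lens dirt), else 0.
    The threshold is an arbitrary illustrative value."""
    return [[1 if local_variance(img, i, j) > threshold else 0
             for j in range(len(img[0]))]
            for i in range(len(img))]
```

A real system would more likely use an established segmentation routine (e.g. GrabCut or a learned segmenter); this sketch only shows why the depth-of-field difference makes the split feasible.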
In some embodiments of the present application, as noted above, there are multiple different types of dirt, and the different types can be processed differently when training or using the frame regression model. The differentiated processing may include, for example: extracting different image features for different types, training with different training weights, and so on.
For example, for step S104, before performing target detection in the image with the convolutional-neural-network-based frame regression model, the model may be trained as follows: acquire a plurality of training image samples for each of a plurality of types of dirt, and set differentiated training weights for at least two of those types; then train the frame regression model according to the differentiated training weights and the training image samples. The differentiated training weights may be fixed, or may change dynamically over multiple training sessions. The specific purpose of the differentiated training may be, for instance, to give more training to types of dirt that are relatively hard to recognize, or to balance the final training effects across different types.
More specifically, setting differentiated training weights for at least two of the plurality of types may include: setting the weights according to the difference in how conspicuous the dirt is in the training image samples corresponding to each of the at least two types, so that training is preferentially biased toward the types whose dirt is relatively inconspicuous. Note that if the differentiated training weights are fixed, training can be performed in one batch rather than in stages, so it is not strictly necessary to train the less conspicuous types first. For example, assuming the at least two types include a significant dirt type and an insignificant dirt type, the fixed training weight set for the insignificant dirt type is greater than that set for the significant dirt type.
Of course, training may also proceed in several distinct phases, with dynamically changing training weights for more flexible training. For example, assuming the at least two types include a significant dirt type and an insignificant dirt type, the training weight set for the insignificant dirt type may be greater than that of the significant dirt type in the first training phase, and equal to or less than it in the second training phase.
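The differentiated training weights above amount to a per-type weighting of the training loss, which can be sketched as follows. The weight values (2.0 vs 1.0) and the function name are illustrative assumptions; the patent fixes only the ordering, namely that the insignificant type gets the larger weight.

```python
# Illustrative fixed weights: the inconspicuous type contributes twice
# as much to the total loss, biasing training toward the harder type.
TYPE_WEIGHTS = {"insignificant": 2.0, "significant": 1.0}

def weighted_batch_loss(samples):
    """samples: list of (dirt_type, per_sample_loss) pairs.
    Returns the weighted sum that the optimizer would minimize."""
    return sum(TYPE_WEIGHTS[dirt_type] * loss for dirt_type, loss in samples)
```

For dynamically changing weights, `TYPE_WEIGHTS` would simply be swapped between training phases, e.g. reduced toward parity in a second phase.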
In some embodiments of the present application, besides providing corresponding dirt prompt information to the user, the sweeping robot can also actively take measures to deal with the dirt, reducing the user's work and improving the user experience. For example, if the sweeping robot is equipped with a device for cleaning its own lens, such as an electric brush, it can actively start that device to clean the dirt off the lens. For another example, the sweeping robot may determine whether the user has cleaned the detected dirt within a set time; if not, the image region where the dirt sits in images subsequently captured by the lens is corrected (to eliminate the dirt's interference as far as possible) before those images are used for the robot's navigation.
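A minimal stand-in for the image-region correction described above: overwrite each pixel inside the dirt frame with the mean of its clean neighbours. This naive fill is an illustrative assumption; a production system would more likely use a proper inpainting routine (e.g. OpenCV's `cv2.inpaint`).

```python
def correct_region(img, mask):
    """Replace pixels where mask == 1 (detected dirt) with the mean of
    their unmasked 8-neighbours. img and mask are 2D lists of equal size."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                clean = [img[a][b]
                         for a in range(max(0, i - 1), min(h, i + 2))
                         for b in range(max(0, j - 1), min(w, j + 2))
                         if not mask[a][b]]
                if clean:  # leave the pixel untouched if fully surrounded
                    out[i][j] = sum(clean) / len(clean)
    return out
```

The point is only that the dirt pixels are replaced with locally plausible values so the navigation pipeline sees less interference; how the replacement is computed is an implementation choice.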
Based on the above description, some embodiments of the present application further provide a flowchart of a specific implementation of the method for detecting dirt on a lens of a sweeping robot in fig. 1 in an actual application scenario, as shown in fig. 3.
The flow in fig. 3 may include the following steps:
s302: and constructing a frame regression model based on the convolutional neural network.
S304: obtaining a large number of dirt image samples and, according to the predefined frames for each of the plurality of types of dirt, correctly labeling the frames of the dirt in those samples.
S306: setting differentiated training weights for at least two of the plurality of dirt types according to the difference in how conspicuous the dirt is in the image samples corresponding to each of those types, and training the frame regression model according to the training weights.
S308: and after the frame regression model is trained, acquiring one or more images to be detected, which are acquired by a lens of the sweeping robot.
S310: segmenting the foreground and the background of the image to be detected with an image segmentation algorithm, to obtain the foreground.
S312: performing target detection with the frame regression model in the foreground, to identify the frames of the dirt.
S314: and the sweeping robot provides corresponding dirt prompt information for the user according to the target detection result.
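The inference-time part of the S302 to S314 flow (S308 through S314) can be condensed into one hypothetical driver. Every function and class name here is an assumed stand-in for a stage of the flow, not an API from the patent; the stubs exist only to make the sketch self-contained.

```python
def segment_foreground(img):
    # placeholder for S310: a real system would run an image
    # segmentation algorithm here and return only the foreground
    return img

class StubModel:
    """Placeholder for the trained frame regression model of S306."""
    def detect(self, foreground):
        return []  # no dirt frames found

def detect_lens_dirt(images, model, notify):
    """Run the per-image detection loop and return the frames found."""
    found = []
    for img in images:                 # S308: images from the lens
        fg = segment_foreground(img)   # S310: split off the background
        boxes = model.detect(fg)       # S312: frame regression in the foreground
        if boxes:
            notify(boxes)              # S314: prompt the user
        found.append(boxes)
    return found
```

The training-time steps (S302 to S306) happen once, offline, before this loop runs on the robot or its companion device.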
Based on the same idea, some embodiments of the present application further provide an apparatus, a device, and a non-volatile computer storage medium corresponding to the above method.
Fig. 4 is a schematic structural diagram of a dirt detection device for a lens of a sweeping robot according to some embodiments of the present application, corresponding to fig. 1, where a dashed box represents an optional module, and the device includes:
an obtaining module 401, configured to obtain one or more images collected by the lens of the sweeping robot;
an identification module 402, configured to perform target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt;
and the prompt module 403 provides corresponding dirt prompt information to the user according to the target detection result.
Optionally, before performing the target detection in the image, the identifying module 402 further performs:
segmenting the foreground and the background of the image by using an image segmentation algorithm;
the identification module 402 performs target detection in the image, specifically including:
the recognition module 402 performs target detection in the foreground of the segmented image.
Optionally, before performing target detection in the image by using a bounding box regression model based on a convolutional neural network, the apparatus further includes:
a training module 404, configured to acquire a plurality of training image samples for each of a plurality of types of dirt, set differentiated training weights for at least two of the plurality of types,
and train the frame regression model according to the differentiated training weights and the training image samples.
Optionally, the training module 404 sets differentiated training weights for at least two of the multiple types, which specifically includes:
the training module 404 sets differentiated training weights for the at least two types according to the difference in how conspicuous the dirt is in the training image samples corresponding to each of the at least two types, so as to preferentially bias training toward the types whose dirt is relatively inconspicuous.
Optionally, the at least two types include a significant dirt type and an insignificant dirt type, and the training weight set for the insignificant dirt type is greater than that set for the significant dirt type.
Optionally, the apparatus further comprises:
a correction module 405, configured to perform, after the prompt module 403 provides the corresponding dirt prompt information to the user: determining whether the user has cleaned the detected dirt within a set time, and if not, correcting the image region where the dirt sits in images subsequently captured by the lens of the sweeping robot before those images are used for the robot's navigation.
Optionally, the dirt comprises at least one of: dust, hair, oil stain.
Fig. 5 is a schematic structural diagram of a dirt detection apparatus for a sweeping robot lens, corresponding to fig. 1, according to some embodiments of the present application, where the apparatus includes:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring one or more images acquired by a lens of the sweeping robot;
performing target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt;
and providing corresponding dirt prompt information for a user according to the target detection result.
Some embodiments of the present application provide a non-volatile computer storage medium for soil detection for a lens of a sweeping robot corresponding to fig. 1, storing computer-executable instructions configured to:
acquiring one or more images acquired by a lens of the sweeping robot;
performing target detection in the image with a frame regression model based on a convolutional neural network, according to predefined frames for each of a plurality of types of dirt, so as to identify the frames of the dirt;
and providing corresponding dirt prompt information for a user according to the target detection result.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device and media embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
The apparatus, device, and medium provided in the embodiments of the present application correspond one to one with the method, and therefore also have beneficial technical effects similar to those of the corresponding method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A dirt detection method for a lens of a sweeping robot, characterized by comprising:
acquiring one or more images acquired by the lens of the sweeping robot;
performing target detection in the image using a bounding box regression model based on a convolutional neural network, according to predefined bounding boxes for a plurality of types of dirt respectively, so as to identify bounding boxes of the dirt; and
providing corresponding dirt prompt information to a user according to the result of the target detection.
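An illustrative, non-claimed sketch of the claim-1 pipeline: a detector returns candidate boxes with class scores, confident detections are kept, and the result is turned into a user-facing dirt prompt. The CNN bounding-box regression model itself is stubbed out; the label map, threshold, and detection format are assumptions, not part of the patent.

```python
# Hypothetical label map and confidence threshold (illustrative values only).
DIRT_TYPES = {0: "dust", 1: "hair", 2: "grease"}
CONF_THRESHOLD = 0.5

def detect_dirt(raw_detections):
    """Filter raw (box, class_id, score) tuples by confidence, keeping
    only known dirt types (stands in for the CNN detector's post-processing)."""
    return [(box, DIRT_TYPES[cls], score)
            for box, cls, score in raw_detections
            if score >= CONF_THRESHOLD and cls in DIRT_TYPES]

def dirt_prompt(detections):
    """Build the user prompt from confirmed detections."""
    if not detections:
        return "Lens clean: no dirt detected."
    kinds = sorted({name for _, name, _ in detections})
    return "Dirt detected on lens: " + ", ".join(kinds) + ". Please wipe the lens."

# Example: two mock detections, as a detector might emit them.
raw = [((10, 10, 40, 40), 1, 0.92),   # hair, high confidence -> kept
       ((50, 60, 70, 80), 0, 0.31)]   # dust, below threshold -> dropped
print(dirt_prompt(detect_dirt(raw)))  # → Dirt detected on lens: hair. Please wipe the lens.
```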
2. The method of claim 1, wherein, before performing target detection in the image, the method further comprises:
segmenting the foreground and the background of the image using an image segmentation algorithm;
and the target detection in the image specifically comprises:
performing target detection in the foreground of the image obtained by the segmentation.
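A toy foreground/background split in the spirit of claim 2, using a fixed intensity threshold on a tiny grayscale grid. A real system would use a proper segmentation algorithm (e.g. Otsu thresholding or GrabCut); the threshold and "dark pixels are foreground" convention here are assumptions for illustration only.

```python
def segment_foreground(image, threshold=128):
    """Return a boolean mask: True where the pixel belongs to the foreground
    (here, simply pixels darker than the threshold)."""
    return [[pix < threshold for pix in row] for row in image]

def foreground_pixels(image, mask):
    """Collect (row, col, value) triples inside the foreground only;
    target detection would then run on this region alone."""
    return [(r, c, image[r][c])
            for r, row in enumerate(mask)
            for c, is_fg in enumerate(row) if is_fg]

image = [[200, 210,  40],
         [190,  30,  35],
         [205, 200, 198]]
mask = segment_foreground(image)
print(foreground_pixels(image, mask))  # → [(0, 2, 40), (1, 1, 30), (1, 2, 35)]
```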
3. The method of claim 1, wherein prior to performing target detection in the image using a convolutional neural network-based bounding box regression model, the method further comprises:
acquiring a plurality of training image samples for each of a plurality of types of dirt, and setting differentiated training weights for at least two of the plurality of types; and
training the bounding box regression model according to the differentiated training weights and the plurality of training image samples.
4. The method according to claim 3, wherein setting the differentiated training weights for at least two of the plurality of types specifically comprises:
setting differentiated training weights for the at least two types according to differences in the saliency of the dirt in the training image samples corresponding to each of the at least two types, so that training is preferentially biased toward the types whose dirt is relatively less salient.
5. The method of claim 3 or 4, wherein the at least two types include a salient dirt type and a non-salient dirt type, and the training weight set for the non-salient dirt type is greater than the training weight set for the salient dirt type.
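A minimal sketch of the "differentiated training weights" idea in claims 3-5: the loss for a less-salient dirt type (e.g. a faint smudge) is up-weighted so that training is biased toward the harder class. The class names and the 2:1 weight ratio are illustrative assumptions, not values from the patent.

```python
import math

# Assumed per-class loss weights: the non-salient type is up-weighted.
CLASS_WEIGHTS = {"salient_dirt": 1.0, "non_salient_dirt": 2.0}

def weighted_nll(prob_correct, label):
    """Per-sample negative log-likelihood scaled by the class weight,
    as a typical weighted cross-entropy term would compute it."""
    return -CLASS_WEIGHTS[label] * math.log(prob_correct)

# Same model confidence on both samples, but the non-salient sample
# contributes twice the loss, biasing gradient updates toward it:
loss_salient = weighted_nll(0.5, "salient_dirt")
loss_faint = weighted_nll(0.5, "non_salient_dirt")
print(loss_faint / loss_salient)  # → 2.0
```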
6. The method of any of claims 1 to 4, wherein, after providing the corresponding dirt prompt information to the user, the method further comprises:
determining whether the user has cleaned the detected dirt within a set time, and if not, correcting the image area where the dirt is located in images subsequently acquired by the lens of the sweeping robot before using those images for navigation of the sweeping robot.
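A rough sketch of the correction step in claim 6: if the user has not cleaned the lens, the image region covered by the dirt is filled in before the image is handed to navigation. Here a toy mean-fill stands in for real image inpainting; the box format and fill strategy are illustrative assumptions.

```python
def mean_fill(image, dirt_box):
    """Replace pixels inside dirt_box = (r0, c0, r1, c1), inclusive,
    by the mean of all pixels outside the box (a crude inpainting stand-in)."""
    r0, c0, r1, c1 = dirt_box
    outside = [image[r][c]
               for r in range(len(image)) for c in range(len(image[0]))
               if not (r0 <= r <= r1 and c0 <= c <= c1)]
    fill = sum(outside) // len(outside)
    return [[fill if (r0 <= r <= r1 and c0 <= c <= c1) else image[r][c]
             for c in range(len(image[0]))] for r in range(len(image))]

image = [[100, 100, 100],
         [100,   0, 100],   # dark "dirt" pixel at (1, 1)
         [100, 100, 100]]
corrected = mean_fill(image, (1, 1, 1, 1))
print(corrected)  # dirt pixel replaced by the mean of the clean pixels
```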
7. The method of any of claims 1 to 4, wherein the dirt comprises at least one of: dust, hair, or grease.
8. A dirt detection apparatus for a lens of a sweeping robot, characterized by comprising:
an acquisition module, configured to acquire one or more images acquired by the lens of the sweeping robot;
an identification module, configured to perform target detection in the image using a bounding box regression model based on a convolutional neural network, according to predefined bounding boxes for a plurality of types of dirt respectively, so as to identify bounding boxes of the dirt; and
a prompting module, configured to provide corresponding dirt prompt information to a user according to the result of the target detection.
9. Dirt detection equipment for a lens of a sweeping robot, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquire one or more images acquired by the lens of the sweeping robot;
perform target detection in the image using a bounding box regression model based on a convolutional neural network, according to predefined bounding boxes for a plurality of types of dirt respectively, so as to identify bounding boxes of the dirt; and
provide corresponding dirt prompt information to a user according to the result of the target detection.
10. A non-transitory computer storage medium storing computer-executable instructions for dirt detection of a lens of a sweeping robot, the computer-executable instructions configured to:
acquire one or more images acquired by the lens of the sweeping robot;
perform target detection in the image using a bounding box regression model based on a convolutional neural network, according to predefined bounding boxes for a plurality of types of dirt respectively, so as to identify bounding boxes of the dirt; and
provide corresponding dirt prompt information to a user according to the result of the target detection.
CN201811637552.8A 2018-12-29 2018-12-29 Dirt detection method, device, equipment and medium for lens of sweeping robot Active CN111374608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811637552.8A CN111374608B (en) 2018-12-29 2018-12-29 Dirt detection method, device, equipment and medium for lens of sweeping robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811637552.8A CN111374608B (en) 2018-12-29 2018-12-29 Dirt detection method, device, equipment and medium for lens of sweeping robot

Publications (2)

Publication Number Publication Date
CN111374608A true CN111374608A (en) 2020-07-07
CN111374608B CN111374608B (en) 2021-08-03

Family

ID=71220553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811637552.8A Active CN111374608B (en) 2018-12-29 2018-12-29 Dirt detection method, device, equipment and medium for lens of sweeping robot

Country Status (1)

Country Link
CN (1) CN111374608B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040223A (en) * 2020-08-25 2020-12-04 RealMe重庆移动通信有限公司 Image processing method, terminal device and storage medium
CN114931337A (en) * 2022-01-23 2022-08-23 深圳银星智能集团股份有限公司 Cleaning method and dirt cleaning equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102162797A (en) * 2010-11-24 2011-08-24 哈尔滨工业大学(威海) Algorithm for detecting glass bottle neck damage and bottle bottom dirt
CN105424723A (en) * 2015-11-28 2016-03-23 惠州高视科技有限公司 Detecting method for defects of display screen module
CN106599925A (en) * 2016-12-19 2017-04-26 广东技术师范学院 Plant leaf identification system and method based on deep learning
CN107194409A (en) * 2016-03-15 2017-09-22 罗伯特·博世有限公司 Detect method, equipment and detection system, the grader machine learning method of pollution
CN107491720A (en) * 2017-04-01 2017-12-19 江苏移动信息系统集成有限公司 A kind of model recognizing method based on modified convolutional neural networks
CN107945158A (en) * 2017-11-15 2018-04-20 上海摩软通讯技术有限公司 A kind of dirty method and device of detector lens
CN108009497A (en) * 2017-11-30 2018-05-08 深圳中兴网信科技有限公司 Image recognition monitoring method, system, computing device and readable storage medium storing program for executing


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040223A (en) * 2020-08-25 2020-12-04 RealMe重庆移动通信有限公司 Image processing method, terminal device and storage medium
CN112040223B (en) * 2020-08-25 2022-08-12 RealMe重庆移动通信有限公司 Image processing method, terminal device and storage medium
CN114931337A (en) * 2022-01-23 2022-08-23 深圳银星智能集团股份有限公司 Cleaning method and dirt cleaning equipment

Also Published As

Publication number Publication date
CN111374608B (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN107403424B (en) Vehicle loss assessment method and device based on image and electronic equipment
CN109344717B (en) Multi-threshold dynamic statistical deep sea target online detection and identification method
CN101699469A (en) Method for automatically identifying action of writing on blackboard of teacher in class video recording
CN111374608B (en) Dirt detection method, device, equipment and medium for lens of sweeping robot
CN109509166B (en) Printed circuit board image detection method and device
CA3136674C (en) Methods and systems for crack detection using a fully convolutional network
CN109784259B (en) Intelligent water transparency identification method based on image identification and Samsung disk assembly
CN109984691A (en) A kind of sweeping robot control method
Grünauer et al. The power of GMMs: Unsupervised dirt spot detection for industrial floor cleaning robots
US11157765B2 (en) Method and system for determining physical characteristics of objects
CN110084825B (en) Image edge information navigation-based method and system
CN111144425A (en) Method and device for detecting screen shot picture, electronic equipment and storage medium
CN111374607A (en) Target identification method and device based on sweeping robot, equipment and medium
CN116805387B (en) Model training method, quality inspection method and related equipment based on knowledge distillation
CN114022804A (en) Leakage detection method, device and system and storage medium
CN111380873A (en) Dirt detection method, device, equipment and medium for lens of sweeping robot
CN104766330A (en) Image processing method and electronic device
CN114882206A (en) Image generation method, model training method, detection method, device and system
CN111524102B (en) Screen dirt detection method and device of liquid crystal display
CN110414542A (en) A kind of fruit flow-paths inspection method and system for vending machine of squeezing the juice automatically
CN111179182B (en) Image processing method and device, storage medium and processor
CN113379683A (en) Object detection method, device, equipment and medium
Ruppel et al. Detection and reconstruction of transparent objects with infrared projection-based RGB-D cameras
Zhai et al. Detection algorithm of rail surface defects based on multifeature saliency fusion method
CN113642565B (en) Object detection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant