WO2022135556A1 - Cleaning robot and cleaning control method thereof - Google Patents

Cleaning robot and cleaning control method thereof

Info

Publication number
WO2022135556A1
WO2022135556A1 (PCT/CN2021/141053)
Authority
WO
WIPO (PCT)
Prior art keywords
stain
cleaning robot
cleaning
robot
degree
Prior art date
Application number
PCT/CN2021/141053
Other languages
English (en)
French (fr)
Inventor
朱松
谭一云
何明明
Original Assignee
苏州宝时得电动工具有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州宝时得电动工具有限公司
Priority to CN202180014614.3A (published as CN115151174A)
Publication of WO2022135556A1


Classifications

    • A47L11/24 Floor-sweeping machines, motor-driven
    • A47L11/28 Floor-scrubbing machines, motor-driven
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002 Installations of electric equipment
    • A47L11/4011 Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • A47L11/4061 Steering means; means for avoiding obstacles; details related to the place where the driver is accommodated
    • A47L11/4094 Accessories to be used in combination with conventional vacuum-cleaning devices
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/0221 Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory involving a learning process
    • G05D1/0236 Control of position or course in two dimensions, specially adapted to land vehicles, using optical markers or beacons in combination with a laser
    • G05D1/024 Control of position or course in two dimensions, specially adapted to land vehicles, using obstacle or wall sensors in combination with a laser
    • G05D1/0251 Control of position or course in two dimensions, specially adapted to land vehicles, using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0276 Control of position or course in two dimensions, specially adapted to land vehicles, using signals provided by a source external to the vehicle
    • A47L2201/04 Automatic control of the travelling movement; automatic obstacle detection
    • A47L2201/06 Control of the cleaning action for autonomous devices; automatic detection of the surface condition before, during or after cleaning

Definitions

  • The present disclosure relates to the field of automation technology and, in particular, to a cleaning robot and a cleaning control method thereof.
  • Cleaning robots are a kind of smart household appliance; through path planning, autonomous navigation, and other technologies, they can automatically complete cleaning work in a room.
  • In the related art, the vision sensor of a cleaning robot serves a relatively single function, and multiple sets of vision sensors are required to realize multiple functions. Because the size of a cleaning robot is limited, multiple sets of vision sensors occupy space in the cleaning robot and also increase its cost.
  • To overcome these problems, the present disclosure provides a cleaning control method and device for a cleaning robot.
  • A cleaning robot, comprising:
  • a robot body;
  • a motion module, disposed on the robot body, configured to support the robot body and drive the cleaning robot to move;
  • a cleaning module, disposed on the robot body, configured to clean the work surface;
  • a vision sensor, arranged on the main body of the robot, for capturing images;
  • a control module, disposed on the robot body, configured to adjust the photographing direction of the vision sensor so that the cleaning robot has at least two different functional modes.
  • When the photographing direction of the vision sensor is obliquely upward, the cleaning robot has a first functional mode;
  • when the photographing direction of the vision sensor is horizontal or obliquely downward, the cleaning robot has a second functional mode.
  • the first functional mode is a room identification mode
  • the second functional mode is an obstacle avoidance mode and/or a stain recognition mode.
  • When the photographing direction of the vision sensor is horizontal, the cleaning robot has an obstacle avoidance mode;
  • when the photographing direction of the vision sensor is obliquely downward, the cleaning robot has a stain recognition mode. A minimal sketch of this direction-to-mode mapping follows.
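The mapping between shooting direction and functional mode can be pictured with a small sketch. This is an illustrative Python outline only; the pitch thresholds, the `Mode` enum, and the function name are assumptions, not part of the patent.

```python
from enum import Enum

class Mode(Enum):
    ROOM_IDENTIFICATION = "room identification"   # first functional mode
    OBSTACLE_AVOIDANCE = "obstacle avoidance"     # second functional mode, horizontal
    STAIN_RECOGNITION = "stain recognition"       # second functional mode, oblique down

def select_mode(pitch_deg: float) -> Mode:
    """Map camera pitch (degrees above horizontal) to a functional mode.

    Positive pitch = shooting obliquely upward, ~0 = horizontal,
    negative = obliquely downward. Thresholds are illustrative.
    """
    if pitch_deg > 5.0:
        return Mode.ROOM_IDENTIFICATION
    if pitch_deg >= -1.0:   # roughly level
        return Mode.OBSTACLE_AVOIDANCE
    return Mode.STAIN_RECOGNITION
```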
  • the main body of the robot is further provided with a supplementary light module for enhancing the contrast between the target object and the background image in the image data captured by the vision sensor.
  • A cleaning control method for a cleaning robot, including:
  • adjusting the vision sensor to shoot obliquely upward, so that the cleaning robot has a first functional mode;
  • and/or, adjusting the vision sensor to shoot horizontally or obliquely downward, so that the cleaning robot has a second functional mode.
  • The step of adjusting the vision sensor to shoot obliquely upward so that the cleaning robot has a first functional mode includes:
  • adjusting the vision sensor to shoot obliquely upward to acquire image data of the environment around the working area of the cleaning robot;
  • performing room identification on the working area according to the image data.
  • The step of adjusting the vision sensor to shoot horizontally or obliquely downward so that the cleaning robot has the second functional mode includes:
  • adjusting the vision sensor to shoot horizontally or obliquely downward, to detect objects within the detection range of the vision sensor;
  • when it is determined that a stain exists within the detection range of the vision sensor, detecting height information of the stain;
  • when the height of the stain is less than or equal to a preset height, acquiring the degree of contamination of the stain and controlling the cleaning robot to clean the stain according to a cleaning strategy matching that degree;
  • when the height of the stain is greater than the preset height, controlling the cleaning robot to perform a preset obstacle avoidance action.
  • The method further includes:
  • when the degree of contamination is greater than a preset threshold, checking the degree of contamination of the stain.
  • Checking the degree of contamination of the stain includes:
  • acquiring image data of the detection range with the vision sensor, obtaining the information entropy of the image data, and determining, according to a preset relationship between information entropy and degree of contamination, the degree of contamination matching the information entropy.
  • The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects. After the cleaning robot starts working, the photographing direction of the vision sensor is adjusted so that the cleaning robot has at least two different functional modes.
  • For example, a vision sensor such as a camera shoots obliquely upward to recognize room attributes and perform room recognition, thereby enabling assisted mapping.
  • The cleaning robot then adjusts the shooting angle of the camera to horizontal or obliquely downward, performs tasks such as stain identification and obstacle identification, and executes the corresponding mopping strategy or obstacle avoidance strategy.
  • Through one set (here, one group or one unit) of vision sensors, the present disclosure realizes multiple functions of the cleaning robot and saves its cost.
  • FIG. 1 is an application scenario diagram of a cleaning control method for a cleaning robot according to an exemplary embodiment.
  • Fig. 2 is a flowchart showing a cleaning control method of a cleaning robot according to an exemplary embodiment.
  • Fig. 3 is a flow chart of a cleaning control method for a cleaning robot according to an exemplary embodiment.
  • Fig. 4 is a flowchart showing a cleaning control method of a cleaning robot according to an exemplary embodiment.
  • Fig. 5 is a schematic structural diagram of a cleaning robot according to an exemplary embodiment.
  • In the following specific embodiment, the cleaning robot is described in detail by taking a mopping robot as an example; refer to Fig. 5.
  • A cleaning robot, comprising:
  • a robot body 500;
  • a vision sensor 503, disposed on the robot body 500, for capturing images;
  • a motion module, disposed on the robot body, configured to support the robot body and drive the cleaning robot to move;
  • a cleaning module, disposed on the robot body, configured to clean the work surface;
  • a control module, disposed on the robot body, configured to adjust the photographing direction of the vision sensor 503 so that the cleaning robot has at least two different functional modes.
  • the visual sensor 503 may be disposed at the front of the robot body, for example, inside the robot casing or outside the casing.
  • the visual sensor 503 may include a camera, and the camera may include a monocular camera, a binocular camera, a wide-angle camera, and the like. In one example, it may also include a combination of cameras and other devices for acquiring three-dimensional image data of the object.
  • The combination of the camera and other equipment may include a structured-light system, such as a camera combined with a projector: the projector projects patterns such as laser stripes, Gray codes, or sinusoidal fringes onto the surface of an object, one or more cameras capture the resulting structured-light image of the surface, and the three-dimensional data of the image are obtained based on the principle of triangulation. It may also include a TOF (Time of Flight) system combining a camera with a laser transmitter: the transmitter emits laser light outward, and the camera receives the laser light reflected by the object to obtain a three-dimensional image of the object. (A generic triangulation sketch follows below.)
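For intuition, the triangulation step mentioned above reduces, for a rectified camera/projector or stereo pair, to the classic depth-from-disparity relation. This is a generic sketch, not the patent's algorithm; the parameter values in the example are made up.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulation for a rectified pair: Z = f * B / d, with focal length f
    in pixels, baseline B in meters, and observed disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# e.g. f = 600 px, baseline = 5 cm, disparity = 12 px  ->  Z = 2.5 m
print(depth_from_disparity(600.0, 0.05, 12.0))
```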
  • the motion module may be disposed at the lower portion of the robot body 500 .
  • the motion module may include a wheel set and a drive motor for driving the movement of the wheel set.
  • the wheel set includes a drive wheel driven by a traveling motor and an auxiliary wheel that assists in supporting the housing.
  • It can be understood that the motion module may alternatively include a crawler structure.
  • In one example, a travel motor may be directly connected to a drive wheel, with the right and left drive wheels each coupled to their own travel motor to realize steering by differential output; in another example, a transmission may be provided, i.e., the same motor drives the right and left drive wheels through different transmissions to realize steering by differential output.
  • the cleaning module may be disposed at the lower portion of the robot body 500 .
  • the cleaning module is configured to clean the work surface.
  • the cleaning module may be, for example, a mopping module.
  • The mopping module is provided with a water tank 501, a water pump 502, and a cleaning medium.
  • The water pump 502 is used to transfer water from the water tank 501 to the cleaning medium, or to spray it directly onto the floor to be cleaned, so that the cleaning medium can perform cleaning.
  • The cleaning medium may be, for example, a mop cloth, mopping paper, or a sponge, and may be disposable or reusable.
  • The control module may be disposed inside the robot body 500.
  • the control module is electrically connected with the motion module and the cleaning module, and controls the movement and operation of the cleaning robot 500 .
  • the control module is configured to adjust the photographing direction of the vision sensor so that the cleaning robot has at least two different functional modes.
  • the vision sensor may cooperate with an adjustment mechanism to adjust the shooting direction of the cleaning robot, and the adjustment mechanism may include a rotating shaft driven by a servo motor or the like.
  • the functional modes include functions or modes possessed by the cleaning robot, such as map building, cleaning, obstacle recognition, tracking, and the like.
  • The robot body 500 further includes a processor configured to execute the robot cleaning control method described in any embodiment of the present disclosure. A vision sensor is provided on the front side of the mopping robot and is electrically connected with the control module; the vision sensor may be mounted on the robot in a detachable or fixed manner.
  • the robot body 500 may further include a battery pack for supplying power for the movement and operation of the robot, and the battery pack is fixedly or detachably installed in the main body, for example, in a casing.
  • When working, the battery pack releases electrical energy to keep the mopping robot 500 working and moving.
  • When not working, the battery can be connected to an external power source to replenish its energy. The mopping robot 500 can also automatically seek the base station to remove, replace, or wash the cleaning medium when it detects that the medium is dirty, so that the cleaning medium is restored to a clean state.
  • In a possible implementation, when the photographing direction of the vision sensor is obliquely upward, the cleaning robot has a first functional mode;
  • when the photographing direction of the vision sensor is horizontal or obliquely downward, the cleaning robot has a second functional mode.
  • In one example, there may be one or two vision sensors.
  • When there is one vision sensor, switching between the functional modes is realized through the adjustment mechanism.
  • When there are two vision sensors, the two sensors may be assigned different functional modes in advance; for example, one shoots obliquely upward so that the cleaning robot has a room recognition mode, realizing the room recognition function, while the other shoots obliquely downward so that the cleaning robot has a stain recognition mode, realizing the stain recognition function.
  • At least one of the two visual sensors can be adjusted, so that the cleaning robot has more different functional modes, and can switch between different functional modes by adjusting the adjustable visual sensor;
  • the control module can adjust the shooting direction of the vision sensor through an adjustment mechanism connected with the vision sensor, thereby switching to different functional modes.
  • In an example in which one of the two vision sensors is adjustable and the other is not, the adjustable vision sensor can be adjusted to the shooting direction of the other sensor, so that when the other sensor fails, it takes over the function of the faulty sensor and the cleaning robot can still work normally.
  • In one example, both vision sensors are adjustable.
  • The advantage of this is that when one vision sensor fails, the other can serve as a backup.
  • By making each vision sensor adjustable, the cleaning robot can realize more functional modes, while the damage of one sensor is prevented from affecting the robot's normal work.
  • When both vision sensors are adjustable, their adjustable ranges may be the same, partially the same, or completely different.
  • the two sensors may be arranged up and down with a preset interval, or may be arranged horizontally with a preset interval, which is not limited in the present disclosure.
  • Optionally, the adjustable ranges of the two adjustable vision sensors are at least partially the same; in other words, the functional modes of the two sensors at least partially overlap. In this way, the overlapping mode functions can be enhanced, for example the coverage can be extended.
  • In addition, the detection results of the two sensors can verify each other, improving the detection accuracy of the overlapping modes.
  • For example, one vision sensor is adjustable between the horizontal and obliquely upward directions, and the other is adjustable between the horizontal and obliquely downward directions; that is, both sensors can be adjusted to the horizontal direction, making the shooting range and results in the horizontal direction more accurate.
  • The advantage of this arrangement is that it avoids an excessively large adjustment angle, which would shorten the life of the vision sensor and/or the adjustment mechanism connected to it.
  • the visual sensor has an auto-focus function, which can realize auto-focus during the mode switching process, so as to obtain a clear image and improve the recognition accuracy.
  • the oblique upward direction may include a direction above the horizontal plane where the vision sensor is located, and the horizontal direction or oblique downward direction may include a plane where the vision sensor is located and a direction below the horizontal plane.
  • the oblique upward direction includes the upward direction deviating from the horizontal plane or the horizontal line
  • the oblique downward direction refers to the downward direction deviating from the horizontal plane or the horizontal line.
  • The first functional mode is different from the second functional mode. The shooting angle of the vision sensor is adjusted obliquely upward to obtain more room attribute information; the room attributes characterize the use of the room, such as bedroom, living room, or kitchen.
  • Therefore, in one example, the first functional mode may include a room identification mode, for example performing room attribute identification of the area to be worked, assisting in creating a work map, and storing it. In subsequent work, path planning, navigation, and the like can be performed based on the work map.
  • the work map is used to limit the work area of the cleaning robot, where the cleaning robot drives and cleans the surface of the work area.
  • the room identification mode may further include room attribute identification. By identifying the room attribute before or during the cleaning work, the position of the robot can be checked.
  • It should be noted that the first functional mode is not limited to the above examples; for example, it may be a function or mode in which the vision sensor shoots obliquely upward to perform target tracking. Those skilled in the art, inspired by the technical essence of the present application, may make other changes, but as long as the realized functions and effects are the same as or similar to those of the present application, they shall be covered within the protection scope of the present application.
  • the shooting angle of the visual sensor is adjusted to a horizontal direction or an oblique downward direction, so as to obtain more information on the working surface.
  • the second functional mode may include a stain recognition mode, and the visual sensor may be used to identify stains in various ways.
  • the stains may be identified through an image segmentation method based on a neural network model.
  • In another example, feature information of the image, such as texture information and gray-value information, may be obtained through image processing and compared with preset feature data to identify stains.
  • It should be noted that the method of identifying stains is not limited to the above examples; for example, image segmentation based on the color information of the image may be used. Those skilled in the art, inspired by the technical essence of the present application, may make other changes, but as long as the realized functions and effects are the same as or similar to those of the present application, they shall be covered within the protection scope of the present application.
  • In another example, the second functional mode may include an obstacle avoidance mode. Specifically, when the height of a stain is greater than a preset height, the stain may actually be an obstacle;
  • in this case, the robot is controlled to perform a preset obstacle avoidance action.
  • The preset obstacle avoidance action may include the following: when the position and direction of the obstacle are detected, for example if the obstacle lies in the forward direction of the automatic walking device, the device can be controlled to turn in a preset direction (left or right) and move forward a preset distance, then turn in the opposite direction (if it turned left after encountering the obstacle, it now turns right; if it turned right, it now turns left) and move forward, thereby going around the obstacle.
  • The preset obstacle avoidance measure may also simply include: if the obstacle lies in the forward direction of the automatic walking device, controlling the device to turn in a preset direction. (A minimal sketch of the detour maneuver follows.)
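A minimal sketch of the detour maneuver described above. The `robot` object with `turn(degrees)` and `forward(distance)` primitives is hypothetical (the patent does not name such an interface), and the distances are placeholders.

```python
def avoid_obstacle(robot, detour_cm: float = 30.0, turn_deg: float = 90.0,
                   first_turn: str = "left") -> None:
    """Go around an obstacle in the forward path: turn one way, advance a
    preset distance, then turn the opposite way and continue forward."""
    sign = 1 if first_turn == "left" else -1
    robot.turn(sign * turn_deg)       # e.g. turn left
    robot.forward(detour_cm)          # sidestep past the obstacle
    robot.turn(-sign * turn_deg)      # then turn the opposite way (right)
    robot.forward(detour_cm)          # resume travel beyond the obstacle
```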
  • It should be noted that the second functional mode is not limited to the above examples; for example, it may also include robot navigation, positioning, and the like. Those skilled in the art, inspired by the technical essence of the present application, may make other changes, but as long as the realized functions and effects are the same as or similar to those of the present application, they shall be covered within the protection scope of the present application.
  • It should be noted that, in one example, the vision sensor can serve different functions as it is adjusted. For example, when shooting obliquely upward, it can be used for positioning and room recognition, such as acquiring more room attribute features, so that the cleaning robot has a room recognition mode or function; when shooting horizontally, it can be used for positioning and object recognition, such as obtaining more environmental information while the robot moves, for example identifying obstacles encountered along the way, so that the robot has an obstacle avoidance mode or function;
  • and when shooting obliquely downward, it can be used for positioning and stain recognition, such as obtaining more information about the floor to be cleaned, so that the cleaning robot has a contaminant (e.g., stain) recognition mode or function.
  • In the embodiments of the present disclosure, the robot generally does not switch back and forth between the first and second functional modes, so the shooting angle of the vision sensor is not repeatedly adjusted across the two relatively large angular ranges.
  • Within the same functional mode, however, the shooting angle of the vision sensor can be adjusted over a small range.
  • In one example, when a stain is detected, the vision sensor can be adjusted to follow the stain as the robot travels, so that the stain occupies the center of the captured image or takes up a larger proportion of it.
  • In a possible implementation, when the photographing direction of the vision sensor is horizontal, the cleaning robot has an obstacle avoidance mode; when the photographing direction is obliquely downward, the cleaning robot has a stain recognition mode.
  • When the photographing direction of the vision sensor is horizontal, more characteristics of an obstacle, such as its shape and size, can be obtained, which favors the obstacle avoidance mode.
  • When the photographing direction of the vision sensor is obliquely downward, more working-surface information can be obtained, which favors the identification of stains.
  • the main body of the robot is further provided with a supplementary light module for enhancing the contrast between the target object and the background image in the image data captured by the vision sensor.
  • the target object may include obstacles, stains, furniture, etc.
  • the background image includes an image area other than the target object in the image data.
  • The emission direction of the supplementary light module can differ from the shooting direction of the vision sensor; the module emits light waves toward the target.
  • Taking a stain as the target object for illustration, this highlights the diffuse reflection on the stain surface and enhances the contrast between the stain and the background image, making later stain identification more accurate.
  • the supplementary light module may include LED lamp beads, reflectors, and the like.
  • For example, in dim light, the supplementary light module can be selectively turned on to assist in photographing the target object. It should be noted that light sources of the supplementary light module may be placed at multiple positions at the front of the cleaning robot (such as front-left, front-center, and front-right); emitting beams from several different positions yields different stain recognition results, which further improves the contrast between the stain and the background and enhances the recognition effect.
  • Fig. 4 is a flowchart showing a cleaning control method of a cleaning robot according to an exemplary embodiment.
  • Referring to Fig. 4, the method, applied to a cleaning robot, includes the following steps:
  • Step S401: adjusting the vision sensor to shoot obliquely upward, so that the cleaning robot has a first functional mode;
  • Step S402: and/or, adjusting the vision sensor to shoot horizontally or obliquely downward, so that the cleaning robot has a second functional mode.
  • FIG. 1 is an application scenario diagram of a cleaning control method for a cleaning robot according to an exemplary embodiment.
  • Fig. 2 is a flowchart showing a cleaning control method of a cleaning robot according to an exemplary embodiment.
  • the cleaning robot 100 is provided with a vision sensor with an adjustable shooting angle.
  • When creating a work map, the cleaning robot 100 adjusts the shooting angle of the vision sensor obliquely upward to obtain more room attribute information, which represents the purpose of the room, such as bedroom, living room, or kitchen.
  • The work map is used to limit the work area of the cleaning robot 100; the robot travels within the work area and cleans its surface. After the cleaning robot completes mapping, it performs normal cleaning work (such as mopping the floor). During normal cleaning, the cleaning robot can perform the obstacle avoidance/stain recognition function and the object recognition function at the same time.
  • For example, when there is one vision sensor, it is adjusted to tilt slightly downward so as to realize both functions; here "slightly downward" means, for example, that the shooting direction of the vision sensor (such as a camera) is below the horizontal line at an angle in the range of 1-15 degrees.
  • When there are two vision sensors, both may be tilted slightly downward to realize the two functions; alternatively, one sensor may look straight ahead to realize the obstacle avoidance/stain recognition function while the other is tilted slightly downward, or downward by about 45 degrees, to realize the stain recognition function.
  • During operation, the cleaning robot 100 adjusts the shooting angle of the vision sensor obliquely downward, detects stains with the vision sensor, and judges the height of each stain. If the height of the stain is less than or equal to a preset threshold, as for the first stain 102 and the second stain 103, the area of the stain is then judged.
  • If the stain covers a relatively large area, such as the first stain 102, the cleaning robot is controlled to clean it with a relatively large cleaning force; if the stain covers a relatively small area, such as the second stain 103, the cleaning robot is controlled to clean it with a smaller cleaning force. If the height of the stain is greater than the preset threshold, as for the small ball 104 in FIG. 1, the cleaning robot is controlled to perform a preset obstacle avoidance action to avoid the small ball 104.
  • the cleaning robot can determine a corresponding cleaning strategy according to the degree of contamination of the stain, and can obtain a better cleaning effect.
  • The beneficial effects of the embodiments of the present disclosure include the following: after the cleaning robot starts working, a vision sensor such as a camera shoots obliquely upward to identify room attributes, perform room identification, and assist in mapping.
  • The cleaning robot then adjusts the shooting angle of the camera to shoot horizontally or obliquely downward, performs tasks such as stain identification and obstacle identification, and executes the corresponding mopping strategy or obstacle avoidance strategy; the identification of stains and of obstacles can be performed at the same time.
  • Multiple functions of the cleaning robot can thus be realized through one set (one or a group) of vision sensors, saving the cost of the cleaning robot.
  • Step S401 includes: adjusting the vision sensor to shoot obliquely upward to acquire image data of the surroundings of the working area of the cleaning robot;
  • performing room identification on the working area according to the image data.
  • The vision sensor is adjusted to shoot obliquely upward so as to obtain image data around the working area, and image feature extraction is performed on the image data to identify objects in the surrounding environment, such as desks, chairs, sofas, beds, cabinets, and washing machines, so as to identify different room attributes. For example, a living room has more sofas, tables, and chairs; a bedroom has more beds and cabinets; a bathroom has a washing machine; and a kitchen has a refrigerator.
  • In one example, the image feature extraction method may include: using a Histogram of Oriented Gradients (HOG) feature extraction method to extract the shape edges of the image and matching the shape edges against preset object shapes to identify the object; other descriptors, such as the Local Binary Pattern (LBP), may also be used. (A rough HOG matching sketch follows.)
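As a rough illustration of HOG-based shape matching, the sketch below uses scikit-image; the 64x64 patch size, the reference-descriptor dictionary, and the distance threshold are all assumptions for demonstration, not values from the patent.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_descriptor(gray_patch: np.ndarray) -> np.ndarray:
    """HOG descriptor of a region, computed on a fixed-size grayscale patch."""
    patch = resize(gray_patch, (64, 64), anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def identify_object(gray_patch, references: dict, threshold: float = 0.4):
    """Match a detected region against preset shape descriptors,
    e.g. {"sofa": vec, "bed": vec}, by Euclidean distance."""
    d = hog_descriptor(gray_patch)
    name, dist = min(((k, np.linalg.norm(d - v)) for k, v in references.items()),
                     key=lambda kv: kv[1])
    return name if dist < threshold else None
```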
  • In one example, the vision sensor can shoot obliquely upward while the position of the cleaning robot relative to walls or furniture is determined, and the cleaning robot is controlled to drive along the wall or the edge of the furniture to perform room recognition, thereby accomplishing the auxiliary mapping task.
  • FIG. 3 is a flowchart of a cleaning control method for a cleaning robot according to an exemplary embodiment.
  • Step S402, adjusting the vision sensor to shoot horizontally or obliquely downward so that the cleaning robot has a second functional mode, includes:
  • adjusting the vision sensor to shoot horizontally or obliquely downward, to detect objects within the detection range of the vision sensor;
  • when it is determined that a stain exists within the detection range of the vision sensor, detecting height information of the stain;
  • when the height of the stain is less than or equal to a preset height, acquiring the degree of contamination of the stain and controlling the cleaning robot to clean the stain according to a cleaning strategy matching that degree;
  • when the height of the stain is greater than the preset height, controlling the cleaning robot to perform a preset obstacle avoidance action.
  • It should be noted that when it is determined that there is no stain within the detection range of the vision sensor, the cleaning robot is controlled to execute the first cleaning strategy; likewise, when the degree of contamination has been checked and the result shows that the contamination is light or that there is no abnormality (for example, the degree of contamination was misjudged), the cleaning robot is controlled to execute the first cleaning strategy.
  • When the cleaning robot performs cleaning work, the vision sensor is adjusted to shoot horizontally or obliquely downward to detect objects within its detection range and identify contaminants (such as stains). In other words, while cleaning, the robot adjusts the vision sensor to shoot horizontally or obliquely downward, detects objects within its field of view through the vision sensor, and then determines whether those objects are contaminants (such as stains).
  • If the height of the stain is greater than the preset height, the cleaning robot is controlled to execute the obstacle avoidance strategy. If the height of the stain is less than the preset height, the cleaning robot is controlled to judge the degree of contamination of the stain, or in other words, the amount of the stain (such as its area). If the amount of the stain is small, the second cleaning strategy is executed; if the amount is large, the third cleaning strategy is executed. The obstacle avoidance strategy of the embodiments of the present disclosure has been described in the above embodiments and is not repeated here.
  • The above-mentioned cleaning robot may be a sweeping robot, a mopping robot, or an integrated sweeping and mopping robot.
  • The first cleaning strategy corresponds to a first mopping strategy, such as a light cleaning strategy, with, for example, 1-2 mopping passes;
  • the second cleaning strategy corresponds to a second mopping strategy, such as a moderate cleaning strategy, with, for example, 4-6 mopping passes;
  • the third cleaning strategy corresponds to a third mopping strategy, such as a heavy cleaning strategy, with, for example, 8-12 mopping passes. (A minimal strategy-selection sketch follows.)
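Pulling the branches above together, a minimal strategy selector might look like the following; the height limit, the area threshold, and the exact pass counts are illustrative placeholders, not values fixed by the patent.

```python
def choose_strategy(stain_height_mm, stain_area_ratio,
                    height_limit_mm=20.0, small_area=0.05):
    """Return a number of mopping passes (light/moderate/heavy strategy)
    or "avoid" when the detected object is too tall to be a stain."""
    if stain_height_mm is not None and stain_height_mm > height_limit_mm:
        return "avoid"      # treat as obstacle: run the avoidance strategy
    if stain_area_ratio is None:
        return 2            # no stain found: first (light) strategy, 1-2 passes
    if stain_area_ratio <= small_area:
        return 5            # small amount: second (moderate) strategy, 4-6 passes
    return 10               # large amount: third (heavy) strategy, 8-12 passes
```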
  • In one example, contaminant identification may be performed as follows: the vision sensor acquires image data within the detection range;
  • the image data are input into a stain recognition model, and a stain recognition result is output by the model, where the stain recognition model is obtained by training on the correspondence between image data and stain recognition results.
  • Training the stain recognition model on the correspondence between image data and stain recognition results may include the following. A set of image samples containing stains is collected in advance; to improve the robustness of the training result,
  • the image samples may be captured under different lighting and angle/focus conditions.
  • The stain image samples are annotated, with the contours of the stains marked.
  • The stain recognition model may include a model based on a convolutional neural network: the network is constructed, training parameters are set in it, and the stain image samples are input into the model to generate predictions.
  • The prediction results may include: the prediction for a pixel position corresponding to a stain in the image is a, and the prediction for a pixel position corresponding to a non-stain is b, where the values of a and b can be preset; in one example, a is set to 1 and b to 0. According to the difference between the prediction results and the annotated image samples, the training parameters are iteratively adjusted until the difference meets a preset requirement. (A minimal training sketch follows.)
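The training loop described above can be sketched with a tiny fully convolutional network in PyTorch. The architecture, loss, and optimizer here are illustrative stand-ins, not the patent's model; only the per-pixel a/b labeling scheme comes from the text.

```python
import torch
import torch.nn as nn

# Tiny per-pixel classifier: outputs a stain logit per pixel, matching the
# a (stain) / b (non-stain) pixel labels described above. Illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()   # difference between prediction and annotation

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: (B,3,H,W) floats; masks: (B,1,H,W) with 1.0 at stain pixels."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()                # iteratively adjust the training parameters
    optimizer.step()
    return loss.item()
```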
  • It should be noted that separate models can be trained for different working-surface materials.
  • The working-surface material may include wood floors, ceramic tiles, marble, and the like. Separate training reduces the complexity of model training and improves the accuracy of the output results. Before performing stain identification, the current working-surface material is first judged and the corresponding stain recognition model is selected, which reduces misjudgment and improves the accuracy of stain identification.
  • In one example, inputting the image data into the stain recognition model and outputting a stain recognition result via the model includes:
  • inputting multiple pieces of image data of the detection range, captured from multiple shooting positions, into the stain recognition model, and taking as the stain identification result the result on which more than a preset number of images agree.
  • In other words, among the identification results for the detection range, the identification result that accounts for the larger proportion is adopted.
  • In an example, 5 images are recognized, and when the recognition results of 4 or more images indicate a stain, the final result is determined to be a stain. (A one-line voting sketch follows.)
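The multi-shot vote is simple enough to state directly; the 4-of-5 threshold below mirrors the example in the text, and the function name is made up for illustration.

```python
def vote_stain(per_image_results, min_agree: int = 4) -> bool:
    """Majority vote over per-image stain detections taken from several
    shooting positions; declare a stain only if >= min_agree images agree."""
    return sum(bool(r) for r in per_image_results) >= min_agree

print(vote_stain([True, True, False, True, True]))   # True: 4 of 5 agree
print(vote_stain([True, False, False, True, True]))  # False: only 3 of 5
```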
  • the image data is input into the pre-trained stain recognition model, and the stain recognition model outputs a stain recognition result.
  • The recognition result may include "no stain", or "stain" together with the marked location area of the stain.
  • When the height of the stain is less than or equal to the preset height, the degree of contamination of the stain is obtained. The method for obtaining the degree of contamination may include:
  • determining the degree of contamination according to a preset relationship between the area proportion of the stain in the identification result and the degree of contamination.
  • The area proportion of stains includes the ratio of the total area of the stains within the same detection range to the area of the detection range.
  • The recognition result output by the stain recognition model may mark a pixel position corresponding to a stain with the value a and a non-stain pixel with the value b; the values of a and b can be preset, and in one example a is set to 1 and b to 0. Therefore, the ratio of the number of pixels predicted as a to the total number of predicted pixels reflects the size of the contaminated area.
  • In one example, a relationship is established in advance between the proportion of stain pixels in the image and the degree of contamination; according to the proportion of stain pixels in the identification result, the matching degree of contamination is determined.
  • In this way, the area proportion of stains can be determined directly from the recognition result data output by the stain recognition model, without more complicated calculations on that data. (A minimal sketch follows.)
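Since the model output is already a binary pixel map, the area proportion is just its mean. A minimal sketch, with made-up level names and thresholds:

```python
import numpy as np

# Illustrative thresholds mapping stain-pixel proportion to a contamination level.
LEVELS = [(0.00, "none"), (0.02, "light"), (0.10, "moderate"), (0.30, "heavy")]

def contamination_from_mask(mask: np.ndarray) -> str:
    """mask: per-pixel model output, 1 (= a) at stain pixels, 0 (= b) elsewhere.
    The mean of the mask equals the stain-pixel proportion of the image."""
    ratio = float(mask.mean())
    level = "none"
    for threshold, name in LEVELS:
        if ratio >= threshold:
            level = name
    return level
```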
  • The degree of contamination of a stain in the embodiments of the present disclosure can be expressed by the appearance characteristics of the stain, such as its area, color depth, and height or thickness. In general, the larger the stain area, the darker the stain color, and the greater its height or thickness, the greater the degree of contamination.
  • the obtaining of the degree of contamination of the stain may be achieved in the following manner.
  • a corresponding relationship between the pollution degree level and the stain area may be established, and the pollution degree matching the stain area is determined by obtaining the actual area of the stain or the proportion of image pixels.
  • a corresponding relationship between the pollution degree level and the color brightness, chromaticity and/or saturation of the stain can be established, and by acquiring the color information of the stain, the pollution degree matching the color information is determined.
  • a corresponding relationship between the pollution degree level and the height of the stain can also be established, and by obtaining the height of the stain, the degree of pollution that matches the height is determined.
  • It should be noted that the method for determining the degree of stain contamination is not limited to the above examples;
  • for example, the grayscale features of the image may be used to determine the degree of stain contamination.
  • Those skilled in the art, inspired by the technical essence of the present application, may make other changes, but as long as the realized functions and effects are the same as or similar to those of the present application, they shall be covered within the protection scope of the present application.
  • In the embodiments of the present disclosure, a cleaning strategy may include a cleaning method and a cleaning intensity.
  • The cleaning method may include mopping, sweeping, or vacuuming, and the like.
  • The cleaning intensity may include light cleaning, moderate cleaning, and heavy cleaning, and the like.
  • For mopping: light cleaning is, for example, mopping the floor 1-2 times; moderate cleaning, mopping 4-6 times; and heavy cleaning, mopping 8-12 times.
  • Alternatively: light cleaning is, e.g., sweeping; moderate cleaning is, e.g., sweeping and vacuuming; and heavy cleaning is, e.g., sweeping, vacuuming, and mopping.
  • It should be noted that the cleaning strategy is not limited to the above examples; for example, it can also be set according to a combination of cleaning methods, number of cleaning passes, or cleaning time. Those skilled in the art, inspired by the technical essence of the present application, may make other changes, but as long as the realized functions and effects are the same as or similar to those of the present application, they shall be covered within the protection scope of the present application.
  • the matching relationship between the degree of soiling and the cleaning strategy may be preset.
  • Controlling the cleaning robot to clean the stains according to the cleaning strategy matching the degree of contamination includes controlling the robot to clean the stains themselves, and may also include cleaning the area within a preset range around the stains, which is not limited in this application.
  • In this way, the present disclosure controls the cleaning robot to clean stains according to a cleaning strategy that matches the stains;
  • cleaning heavy contamination with a heavy cleaning method can effectively clean the floor stains while saving the robot's energy.
  • In a possible implementation, the method further includes:
  • when the degree of contamination is greater than a preset threshold, checking the degree of contamination of the stain.
  • In an example, when the degree of contamination is greater than the preset threshold, checking the degree of contamination of the stain may include:
  • acquiring image data of the detection range with the vision sensor, obtaining the information entropy of the image data, and determining, according to a preset relationship between information entropy and degree of contamination, the degree of contamination matching the information entropy.
  • The information entropy of an image can represent its content; for example, an entirely white image has an information entropy of 0, as does an entirely black image.
  • For an image containing both black and white regions, the information entropy is no longer 0, and for denser black-and-white images the entropy value increases. Therefore, when the cleaning robot travels on empty ground, the information entropy of the captured image is relatively small; if it encounters a stain, the entropy is relatively large, and the larger the stain area and the heavier the contamination, the larger the information entropy of the captured image.
  • The information entropy of the image data acquired by the vision sensor within the detection range can be calculated by existing methods for computing image information entropy, which are not repeated in this disclosure.
  • When the information entropy is greater than a preset value, the contamination of the stain is relatively serious; when the information entropy is less than or equal to the preset value, the contamination is relatively light. (A minimal entropy sketch follows.)
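A minimal sketch of the entropy check, assuming 8-bit grayscale input; the 4-bit threshold is a placeholder, since the patent leaves the preset value open.

```python
import numpy as np

def image_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of an 8-bit grayscale image. A uniform all-white
    or all-black image yields 0; busier content yields larger values."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # skip empty bins (0 * log 0 -> 0)
    return float(-(p * np.log2(p)).sum())

def is_heavily_contaminated(gray: np.ndarray, threshold_bits: float = 4.0) -> bool:
    """Entropy above the preset value suggests relatively serious contamination."""
    return image_entropy(gray) > threshold_bits
```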
  • the inspection method may further include: inspecting the degree of contamination using the appearance parameter of the stain.
  • the appearance parameters may include the thickness of the stain, the area of the stain, the texture of the stain, the color of the stain, and the like.
  • In one example, checking the degree of contamination of the stain may include:
  • acquiring image data of the detection range with the vision sensor, extracting the appearance parameters of the contaminant from the image data, and comparing the appearance parameters with reference values to determine the degree of contamination of the contaminant.
  • In one example, the appearance parameters may include the grayscale, texture, and color of the contaminant.
  • The appearance parameters can be obtained by any of the feature extraction methods in the above embodiments and compared with reference values: if a parameter value is greater than the reference value, the contamination of the stain is relatively serious; if it is less than the reference value, the contamination is relatively light. (A small comparison sketch follows.)
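A sketch of the appearance-parameter comparison; the reference values and the choice of mean gray level and standard deviation as stand-ins for "grayscale" and "texture" are assumptions for illustration only.

```python
import numpy as np

# Hypothetical calibrated reference values for one floor type.
REFERENCE = {"gray_level": 120.0, "texture_std": 18.0}

def appearance_check(stain_patch: np.ndarray) -> str:
    """Compare simple appearance parameters of the stain region against
    reference values: above the reference -> relatively serious contamination."""
    gray = float(stain_patch.mean())
    texture = float(stain_patch.std())
    heavy = gray > REFERENCE["gray_level"] or texture > REFERENCE["texture_std"]
    return "serious" if heavy else "light"
```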
  • In one example, checking the degree of contamination may further include checking it in combination with map information: for example, the cleaning robot can determine its location from the map information, and when that location contains textured ground, such as marble near a threshold, the texture may cause false detection. Taking the map into account in this way allows the contamination of pollutants to be verified.
  • In summary, when the detected contamination of a stain is relatively heavy, its degree can be checked by any of the methods in the above embodiments, so as to prevent the cleaning robot from performing deep cleaning on a falsely detected stain and thereby losing work efficiency.

Abstract

A cleaning robot and a cleaning control method thereof. The cleaning robot is provided with a vision sensor (503) and comprises: a robot body (500); the vision sensor (503), disposed on the robot body (500), for capturing images; a motion module, disposed on the robot body (500), configured to support the robot body (500) and drive the cleaning robot to move; a cleaning module, disposed on the robot body (500), configured to clean the work surface; and a control module, disposed on the robot body (500), configured to adjust the photographing direction of the vision sensor (503) so that the cleaning robot has at least two different functional modes. Through one set of vision sensors (503), multiple functions of the cleaning robot can be realized, saving the cost of the cleaning robot.

Description

Cleaning robot and cleaning control method thereof
Technical Field
The present disclosure relates to the field of automation technology and, in particular, to a cleaning robot and a cleaning control method thereof.
Background
Cleaning robots are a kind of smart household appliance; through path planning, autonomous navigation, and other technologies, they can automatically complete cleaning work in a room. In the related art, the vision sensor of a cleaning robot serves a relatively single function, and multiple sets of vision sensors are required to realize multiple functions. Because the size of a cleaning robot is limited, multiple sets of vision sensors occupy space in the cleaning robot and also increase its cost.
Summary
To overcome the problems in the related art, the present disclosure provides a cleaning control method and device for a cleaning robot.
According to a first aspect of the embodiments of the present disclosure, a cleaning robot is provided, comprising:
a robot body;
a motion module, disposed on the robot body, configured to support the robot body and drive the cleaning robot to move;
a cleaning module, disposed on the robot body, configured to clean the work surface;
a vision sensor, disposed on the robot body, for capturing images;
a control module, disposed on the robot body, configured to adjust the photographing direction of the vision sensor so that the cleaning robot has at least two different functional modes.
In a possible implementation, when the photographing direction of the vision sensor is obliquely upward, the cleaning robot has a first functional mode;
when the photographing direction of the vision sensor is horizontal or obliquely downward, the cleaning robot has a second functional mode.
In a possible implementation, the first functional mode is a room identification mode;
the second functional mode is an obstacle avoidance mode and/or a stain recognition mode.
In a possible implementation, when the photographing direction of the vision sensor is horizontal, the cleaning robot has an obstacle avoidance mode;
when the photographing direction of the vision sensor is obliquely downward, the cleaning robot has a stain recognition mode.
In a possible implementation, the robot body is further provided with a supplementary light module for enhancing the contrast between the target object and the background image in the image data captured by the vision sensor.
According to a second aspect of the embodiments of the present disclosure, a cleaning control method for a cleaning robot is provided, including:
adjusting the vision sensor to shoot obliquely upward, so that the cleaning robot has a first functional mode;
and/or, adjusting the vision sensor to shoot horizontally or obliquely downward, so that the cleaning robot has a second functional mode.
In a possible implementation, the step of adjusting the vision sensor to shoot obliquely upward so that the cleaning robot has a first functional mode includes:
adjusting the vision sensor to shoot obliquely upward to acquire image data of the environment around the working area of the cleaning robot;
performing room identification on the working area according to the image data.
In a possible implementation, the step of adjusting the vision sensor to shoot horizontally or obliquely downward so that the cleaning robot has a second functional mode includes:
adjusting the vision sensor to shoot horizontally or obliquely downward, to detect objects within the detection range of the vision sensor;
when it is determined that a stain exists within the detection range of the vision sensor, detecting height information of the stain;
when the height of the stain is less than or equal to a preset height, acquiring the degree of contamination of the stain and controlling the cleaning robot to clean the stain according to a cleaning strategy matching that degree;
when the height of the stain is greater than the preset height, controlling the cleaning robot to perform a preset obstacle avoidance action.
In a possible implementation, the method further includes:
when the degree of contamination is greater than a preset threshold, checking the degree of contamination of the stain.
In a possible implementation, checking the degree of contamination of the stain when the degree of contamination is greater than the preset threshold includes:
acquiring image data of the detection range with the vision sensor;
extracting appearance parameters of the stain from the image data and comparing the appearance parameters with reference values to determine the degree of contamination of the stain;
and/or,
acquiring image data of the detection range with the vision sensor;
obtaining the information entropy of the image data;
determining, according to a preset relationship between information entropy and degree of contamination, the degree of contamination matching the information entropy.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects. After the cleaning robot starts working, the photographing direction of the vision sensor is adjusted so that the cleaning robot has at least two different functional modes. For example, a vision sensor such as a camera shoots obliquely upward to recognize room attributes and perform room identification, thereby enabling assisted mapping. The cleaning robot then adjusts the shooting angle of the camera to horizontal or obliquely downward, performs tasks such as stain identification and obstacle identification, and executes the corresponding mopping strategy or obstacle avoidance strategy. Through one set (here, one group or one unit) of vision sensors, the present disclosure realizes multiple functions of the cleaning robot and saves its cost.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain its principles.
Fig. 1 is an application scenario diagram of a cleaning control method of a cleaning robot according to an exemplary embodiment.
Fig. 2 is a flowchart of a cleaning control method of a cleaning robot according to an exemplary embodiment.
Fig. 3 is a flowchart of a cleaning control method of a cleaning robot according to an exemplary embodiment.
Fig. 4 is a flowchart of a cleaning control method of a cleaning robot according to an exemplary embodiment.
Fig. 5 is a schematic structural diagram of a cleaning robot according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 5 is a schematic structural diagram of a cleaning robot according to an exemplary embodiment. In the following specific embodiment, the cleaning robot is described in detail by taking a mopping robot as an example. Referring to Fig. 5, a cleaning robot comprises:
a robot body 500;
a vision sensor 503, disposed on the robot body 500, for capturing images;
a motion module, disposed on the robot body, configured to support the robot body and drive the cleaning robot to move;
a cleaning module, disposed on the robot body, configured to clean the work surface;
a control module, disposed on the robot body, configured to adjust the photographing direction of the vision sensor 503 so that the cleaning robot has at least two different functional modes.
In the embodiments of the present disclosure, the vision sensor 503 may be disposed at the front of the robot body, for example inside or outside the robot housing. The vision sensor 503 may include a camera, and the camera may include a monocular camera, a binocular camera, a wide-angle camera, and the like. In one example, it may also include a combination of a camera and other devices for acquiring three-dimensional image data of an object. The combination of the camera and other devices may include a structured-light system, such as a camera combined with a projector: the projector projects patterns such as laser stripes, Gray codes, or sinusoidal fringes onto the surface of an object, one or more cameras capture the resulting structured-light image of the surface, and the three-dimensional data of the image are obtained based on the principle of triangulation. It may also include a TOF (Time of Flight) system combining a camera with a laser transmitter: the transmitter emits laser light outward, and the camera receives the laser light reflected by the object to obtain a three-dimensional image of the object.
In the embodiments of the present disclosure, the motion module may be disposed at the lower part of the robot body 500. For example, the motion module may include a wheel set and a drive motor that drives the wheel set; typically, the wheel set includes a drive wheel driven by a travel motor and an auxiliary wheel that assists in supporting the housing. It can be understood that the motion module may alternatively include a crawler structure. In one example, a travel motor may be directly connected to a drive wheel, with the right and left drive wheels each coupled to their own travel motor to realize steering by differential output; in another example, a transmission may be provided, i.e., the same motor drives the right and left drive wheels through different transmissions to realize steering by differential output.
In the embodiments of the present disclosure, the cleaning module may be disposed at the lower part of the robot body 500. The cleaning module is configured to clean the work surface. In the embodiments of the present disclosure, the cleaning module may be, for example, a mopping module. In one example, the mopping module is provided with a water tank 501, a water pump 502, and a cleaning medium; the water pump 502 is used to transfer water from the water tank 501 to the cleaning medium, or to spray it directly onto the floor to be cleaned, so that the cleaning medium can perform cleaning. The cleaning medium may be, for example, a mop cloth, mopping paper, or a sponge. It should be noted that the cleaning medium may be disposable or reusable.
In the embodiments of the present disclosure, the control module may be disposed inside the robot body 500. The control module is electrically connected with the motion module and the cleaning module, and controls the movement and operation of the cleaning robot 500. The control module is configured to adjust the photographing direction of the vision sensor so that the cleaning robot has at least two different functional modes.
In one example, the vision sensor may cooperate with an adjustment mechanism to adjust the shooting direction, and the adjustment mechanism may include a rotating shaft driven by a servo motor or the like. The functional modes include functions or modes possessed by the cleaning robot, such as map building, cleaning, obstacle recognition, and tracking. The robot body 500 further includes a processor configured to execute the robot cleaning control method described in any embodiment of the present disclosure. A vision sensor is provided on the front side of the mopping robot and is electrically connected with the control module; the vision sensor may be mounted on the robot in a detachable or fixed manner.
In the embodiments of the present disclosure, the robot body 500 may further include a battery pack for supplying power for the movement and operation of the robot; the battery pack is fixedly or detachably installed in the body, for example in the housing. When working, the battery pack releases electrical energy to keep the mopping robot 500 working and moving. When not working, the battery can be connected to an external power source to replenish its energy; the mopping robot 500 can also automatically seek the base station to remove, replace, or wash the cleaning medium when it detects that the medium is dirty, so that the cleaning medium is restored to a clean state.
在一种可能的实现方式中,所述视觉传感器的拍摄方向为斜向上方向,所述清洁机器人具有第一功能模式;
所述视觉传感器的拍摄方向为水平方向或斜向下方向,所述清洁机器人具有第二功能模式。
在一个示例中,所述视觉传感器为一个或两个。
当所述视觉传感器为1个时,通过调节机构的调节实现不能模式的切换。
当所述视觉传感器为2个时,两个所述视觉传感器可以预先对应不同的功能模式,例如其中一个斜向上拍摄,使得所述清洁机器人具有房间识别模式,从而实现房间识别功能,另一个斜向下拍摄,使得所述清洁机器人具有污渍识别模式,从而实现污渍识别功能;
在一个示例中,两个视觉传感器中至少存在一个视觉传感器可调节,使得清洁机器人具有更多种不同的功能模式,且可以通过调节该可调节的视觉传感器在不同的功能模式之间进行切换;例如控制模块可通过与视觉传感器相连接的调节机构来调节视觉传感器的拍摄方向,从而切换到不同的功能模式。
在一个示例中,两个视觉传感器中一个视觉传感器可调节,另一个视觉传感器不可调节,该可调节的视觉传感器可以调节至另一个视觉传感器的拍摄方向,以便当另一个视觉传感器故障时,起到该故障的视觉传感器所对应的功能,使得清洁机器人仍能够进行正常工作。
在一个示例中,两个视觉传感器均可调节,这样做的好处在于,当某一个视觉传感器因故障无法实现功能时,另一个视觉传感器可以作为备用,通过将每个视觉传感器设置为可调节,可以使得清洁机器人实现更多的功能模式,同时避免一个视觉传感器损坏影响到清洁机器人的正常工作。
在一个示例中,两个传感器在一个示例中,两个视觉传感器均可调节,且两个可调节的视觉传感器的可调节范围可以相同也可以部分相同,还可以完全不同。
It should be noted that the two sensors may be arranged one above the other at a preset spacing, or arranged horizontally at a preset spacing, which the present disclosure does not limit.
Optionally, the adjustable ranges of the two adjustable visual sensors are at least partially identical; in other words, the function modes of the two visual sensors at least partially overlap. In this way, the overlapping mode functions can be enhanced, for example their range can be extended; in addition, the detection results of the two sensors can verify each other, improving the detection accuracy of the overlapping modes. For example, one visual sensor is adjustable between the horizontal direction and an obliquely upward direction, and the other is adjustable between the horizontal direction and an obliquely downward direction; that is, both visual sensors can be adjusted to the horizontal direction, making the shooting range and results in the horizontal direction more accurate. The benefit of this arrangement is that it avoids shortening the service life of the visual sensor and/or the adjustment mechanism connected to it through an excessively large adjustment angle.
In one example, the visual sensor has an autofocus function and can autofocus during mode switching, in order to obtain clear images and improve recognition accuracy.
In the embodiments of the present disclosure, the obliquely upward direction may include directions above the horizontal plane in which the visual sensor is located, and the horizontal or obliquely downward direction may include that plane and directions below it. In other words, obliquely upward means a direction deviating upward from the horizontal plane or horizontal line, and obliquely downward means a direction deviating downward from the horizontal plane or horizontal line. In the embodiments of the present disclosure, the first function mode differs from the second function mode. The shooting angle of the visual sensor is adjusted to the obliquely upward direction to obtain more room attribute information, where the room attribute characterizes the use of the room, such as bedroom, living room, or kitchen. Therefore, in one example, the first function mode may include a room recognition mode, for example identifying the room attributes of the area to be worked, assisting in creating a working map, and storing it; in subsequent work, path planning, navigation, and the like can be based on this working map. The working map is used to delimit the working area of the cleaning robot; the cleaning robot travels in the working area and cleans its surface. In this embodiment, the room recognition mode may also include room attribute recognition: by identifying the attributes of the room before or during the cleaning work, the position of the robot can be verified. It should be noted that the first function mode is not limited to the above examples; for example, the visual sensor may shoot obliquely upward to perform a target tracking function or mode. Those skilled in the art, inspired by the technical essence of this application, may make other changes, but as long as the functions and effects achieved are the same as or similar to those of this application, they shall all be covered by the protection scope of this application. In the embodiments of the present disclosure, the shooting angle of the visual sensor is adjusted to the horizontal or obliquely downward direction to obtain more information about the working surface. In one example, the second function mode may include a stain recognition mode. Stains may be recognized with the visual sensor in various ways: in one example, stains may be identified through an image segmentation method based on a neural network model; in another example, feature information of the image, such as texture information and gray-value information, may be obtained through image processing and compared with preset feature data to identify stains. It should be noted that the stain recognition method is not limited to the above examples; for example, image segmentation based on the color information of the image may be used. Those skilled in the art, inspired by the technical essence of this application, may make other changes, but as long as the functions and effects achieved are the same as or similar to those of this application, they shall all be covered by the protection scope of this application.
In another example, the second function mode may include an obstacle avoidance mode. Specifically, when the height of a stain is greater than the preset height, this indicates that the stain may be an obstacle, and the robot is then controlled to perform the preset obstacle avoidance action. The preset obstacle avoidance action may include, once the position and direction of the obstacle are detected: for example, if the obstacle lies in the travel direction of the autonomous device, controlling the device to turn in a preset direction (left or right) and advance a preset distance, then turn in the opposite direction (if it turned left after encountering the obstacle, it now turns right; if it turned right, it now turns left) and advance, so as to go around the obstacle. The preset avoidance measures may also include: if the obstacle lies in the travel direction of the autonomous device, controlling the device to turn in a preset direction. It should be noted that the second function mode is not limited to the above examples; for example, it may also include robot navigation, localization, and the like. Those skilled in the art, inspired by the technical essence of this application, may make other changes, but as long as the functions and effects achieved are the same as or similar to those of this application, they shall all be covered by the protection scope of this application.
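A minimal sketch of such a preset avoidance maneuver is given below; the motion interface (turn, move_forward) is hypothetical and merely stands in for whatever commands the control module exposes.

    # Hedged sketch: turn in a preset direction, advance a preset distance,
    # then turn back the opposite way and advance to pass around the obstacle.
    def avoid_obstacle(robot, first_turn: str = "left",
                       turn_angle_deg: float = 90.0, detour_m: float = 0.3):
        opposite = "right" if first_turn == "left" else "left"
        robot.turn(first_turn, turn_angle_deg)   # e.g. turn left first
        robot.move_forward(detour_m)             # sidestep the obstacle
        robot.turn(opposite, turn_angle_deg)     # then turn the opposite way
        robot.move_forward(detour_m)             # and continue past it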
It should be noted that, in one example, the visual sensor may serve different functions as it is adjusted. For example, when the visual sensor shoots obliquely upward, it can be used for localization and room recognition, such as acquiring more room attribute features, giving the cleaning robot a room recognition mode or function; when it shoots horizontally, it can be used for localization and object recognition, such as acquiring more environmental information as the robot moves, for example recognizing obstacles encountered along the way, giving the robot an obstacle avoidance mode or function; when it shoots obliquely downward, it can be used for localization and stain recognition, such as acquiring more information about the floor to be cleaned, giving the robot a contaminant (e.g., stain) recognition mode or function.
In the embodiments of the present disclosure, the robot generally does not switch back and forth between the first and second function modes, so the shooting angle of the visual sensor is not repeatedly adjusted across the two larger ranges. Within the same function mode, however, the shooting angle may be adjusted by smaller amounts. In one example, when the robot has detected a stain, the visual sensor may be adjusted toward the stain as the robot travels, so that the stain occupies the center of the captured image, or a larger proportion of it. When the shooting direction of the visual sensor is horizontal or obliquely downward, the stain recognition function can be enabled first to identify stains; once a stain is confirmed, its height is then judged so as to perform the obstacle recognition function.
In a possible implementation, when the shooting direction of the visual sensor is horizontal, the cleaning robot has the obstacle avoidance mode; when the shooting direction of the visual sensor is obliquely downward, the cleaning robot has the stain recognition mode. In the embodiments of the present disclosure, when the shooting direction of the visual sensor is horizontal, more features of obstacles, such as shape and size, can be obtained, which is more conducive to the obstacle avoidance mode; when the shooting direction is obliquely downward, more information about the working surface can be obtained, which is conducive to stain recognition.
In a possible implementation, the robot body is further provided with a fill-light module for enhancing the contrast between the target object and the background image in the image data captured by the visual sensor.
In the embodiments of the present disclosure, the target object may include obstacles, stains, furniture, and the like, and the background image includes the image regions other than the target object. The emission direction of the fill-light module may differ from the shooting direction of the visual sensor. Taking a stain as the target object by way of example: the fill-light module emits light waves toward the object, highlighting the diffuse reflection on the stain surface and enhancing the contrast between the stain and the background image, which makes subsequent stain recognition more accurate. In one example, the fill-light module may include LED lamp beads, reflectors, and the like.
For example, in dim lighting the fill-light module can optionally be turned on to provide supplementary light and assist in capturing the target object. It should be noted that the light sources of the fill-light module may be placed at multiple positions at the front of the cleaning robot (for example front-left, front-center, and front-right); beams emitted from these different positions yield different stain recognition results, further improving the contrast between the stain and the background and enhancing the recognition effect.
Fig. 4 is a flowchart of a cleaning control method for a cleaning robot according to an exemplary embodiment. Referring to Fig. 4, the method is used for a cleaning robot and includes the following steps:
Step S401: adjusting the visual sensor to shoot obliquely upward, so that the cleaning robot has the first function mode;
Step S402: and/or, adjusting the visual sensor to shoot horizontally or obliquely downward, so that the cleaning robot has the second function mode.
In the embodiments of the present disclosure, the contents of the visual sensor, the specific contents of the first and second function modes, and the specific methods for adjusting the visual sensor have been explained in the above embodiments and are not repeated here. In one example, Fig. 1 is an application scenario diagram of a cleaning control method for a cleaning robot according to an exemplary embodiment, and Fig. 2 is a flowchart of such a method. Referring to Figs. 1 and 2, a visual sensor with an adjustable shooting angle is mounted on the cleaning robot 100. When creating a working map, the cleaning robot 100 adjusts the shooting angle of the visual sensor to the obliquely upward direction to obtain more room attribute information, where the room attribute indicates the use of the room, such as bedroom, living room, or kitchen. The working map is used to delimit the working area of the cleaning robot 100; the robot travels in the working area and cleans its surface. After mapping is completed, the robot performs normal cleaning work (such as mopping), during which it can simultaneously perform the obstacle avoidance/stain recognition function and the object recognition function. For example, when there is one visual sensor, that sensor is tilted slightly downward to realize both functions; here "slightly downward" may mean, for example, that the shooting direction of the visual sensor (such as a camera) lies below the horizontal line at an angle of 1-15 degrees to it. Alternatively, when there are two visual sensors, both may be tilted slightly downward to realize the two functions; or one may look straight ahead to realize the obstacle avoidance/stain recognition function while the other is tilted slightly downward, or downward by about 45 degrees, to realize the stain recognition function; of course, one may also be tilted slightly downward to realize the obstacle avoidance/stain recognition function while the other is tilted downward by about 45 degrees to realize the stain recognition function. During operation, the cleaning robot 100 adjusts the shooting angle of the visual sensor to the obliquely downward direction, detects stains with the visual sensor, and judges the stain height. If the stain height is less than or equal to a preset threshold, as for the first stain 102 and the second stain 103, the stain area is then judged. If the stain covers a large area, such as the first stain 102, the cleaning robot is controlled to clean it with greater cleaning intensity; if the stain covers a small area, such as the second stain 103, the robot is controlled to clean it with lower intensity. If the stain height is greater than the preset threshold, as for the small ball 104 in Fig. 1, the robot is controlled to perform the preset obstacle avoidance action and go around the ball 104. In the embodiments of the present disclosure, the cleaning robot can determine the corresponding cleaning strategy according to the degree of contamination of the stain and can thus achieve a better cleaning effect.
The beneficial effects of the embodiments of the present disclosure include: after the cleaning robot starts working, the visual sensor, such as a camera, shoots obliquely upward to identify room attributes, perform room recognition, and assist in map building. The robot then adjusts the camera's shooting angle to horizontal or obliquely downward to perform stain recognition, obstacle recognition, and similar tasks, executing the corresponding mopping or obstacle avoidance strategy, with stain recognition and obstacle recognition carried out simultaneously. With a single set (for example one sensor or one group of sensors) of visual sensors, the present disclosure can realize multiple functions of the cleaning robot and thus reduce its cost.
In a possible implementation, step S401 includes:
adjusting the visual sensor to shoot obliquely upward, so as to acquire image data of the environment surrounding the working area of the cleaning robot;
performing room recognition on the working area according to the image data.
In the embodiments of the present disclosure, the visual sensor is adjusted to shoot obliquely upward so as to acquire image data of the surroundings of the robot's working area; image features are extracted from the image data to recognize objects in the surrounding environment, such as tables and chairs, sofas, beds, cabinets, and washing machines, and thereby identify different room attributes. For example, living rooms mostly contain sofas, tables, and chairs; bedrooms mostly contain beds and cabinets; bathrooms mostly contain washing machines; and kitchens mostly contain refrigerators. In one example, the image feature extraction method may include: using the Histogram of Oriented Gradients (HOG) feature extraction method to extract shape edges from the image and matching those edges against preset object shapes to recognize the object. It may also include using the Local Binary Pattern (LBP) feature extraction method to extract texture features from the image, or extracting color features such as a color correlogram, and comparing the texture features or color correlogram with a reference to recognize the object. A minimal sketch of such feature extraction is given below.
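The sketch assumes scikit-image as the library; the disclosure itself names no implementation.

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def extract_features(gray: np.ndarray):
        # HOG captures shape edges, to be matched against preset object shapes.
        hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))
        # LBP captures local texture; its histogram is a compact descriptor
        # that can be compared with a reference.
        lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return hog_vec, lbp_hist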
In the embodiments of the present disclosure, the visual sensor may shoot obliquely upward to determine the position of a wall or piece of furniture relative to the cleaning robot, and the robot may be controlled to travel along the wall or the edge of the furniture while performing room recognition, so as to accomplish the auxiliary mapping task.
In a possible implementation, Fig. 3 is a flowchart of a cleaning control method for a cleaning robot according to an exemplary embodiment. Referring to Fig. 3, step S402, adjusting the visual sensor to shoot horizontally or obliquely downward so that the cleaning robot has the second function mode, includes:
adjusting the visual sensor to shoot horizontally or obliquely downward, so as to detect objects within the detection range of the visual sensor;
when it is determined that a stain exists within the detection range of the visual sensor, detecting height information of the stain;
when the height of the stain is less than or equal to the preset height, acquiring the degree of contamination of the stain and controlling the cleaning robot to clean the stain according to the cleaning strategy matching that degree;
when the height of the stain is greater than the preset height, controlling the cleaning robot to perform the preset obstacle avoidance action.
In one example, when it is determined that no stain exists within the detection range of the visual sensor, the cleaning robot is controlled to execute a first cleaning strategy; and/or, when the degree of contamination is verified using the information entropy or grayscale features of the image and the verification shows light contamination or no anomaly (i.e., the degree of contamination was misjudged), the cleaning robot is controlled to execute the first cleaning strategy. In the embodiments of the present disclosure, during cleaning the robot adjusts the visual sensor to shoot horizontally or obliquely downward to detect objects within the sensor's detection range and recognize contaminants (such as stains) with the visual sensor. In other words, during cleaning the robot adjusts the visual sensor to shoot horizontally or obliquely downward, detects objects within its field of view through the visual sensor, and then judges whether an object is a contaminant (such as a stain). Taking a stain as the contaminant: if a stain is recognized, its height is judged against the preset height; if the stain height is greater than the preset height, the robot executes the obstacle avoidance strategy. If the stain height is less than the preset height, the robot judges the degree of contamination of the stain, in other words the amount of staining (such as the stain area); if the amount is small, it executes a second cleaning strategy, and if the amount is large, it executes a third cleaning strategy. The obstacle avoidance strategy in the embodiments of the present disclosure has been described in the above embodiments and is not repeated here.
It should be noted that the above cleaning robot may be a sweeping robot, a mopping robot, or a combined sweeping-and-mopping robot. Taking a mopping robot or a combined sweeping-and-mopping robot with a mopping function as an example: the first cleaning strategy corresponds to a first mopping strategy, for example a light cleaning strategy with, say, 1-2 mopping passes; the second cleaning strategy corresponds to a second mopping strategy, for example a medium cleaning strategy with, say, 4-6 passes; and the third cleaning strategy corresponds to a third mopping strategy, for example a heavy cleaning strategy with, say, 8-12 passes. This decision flow is sketched below.
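The thresholds and the helpers detect_stain, clean, and avoid_obstacle in this sketch are illustrative assumptions only, not elements of the disclosure.

    # Hedged sketch of the per-frame decision flow described above.
    def handle_frame(robot, frame, max_stain_height_m: float = 0.02):
        stain = detect_stain(frame)              # assumed detector, see below
        if stain is None:
            robot.clean("light")                 # first strategy: 1-2 passes
        elif stain.height_m > max_stain_height_m:
            robot.avoid_obstacle()               # too tall: treat as obstacle
        elif stain.area_ratio < 0.1:             # small amount of staining
            robot.clean("medium")                # second strategy: 4-6 passes
        else:
            robot.clean("heavy")                 # third strategy: 8-12 passes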
In the embodiments of the present disclosure, when determining whether a stain exists within the detection range of the visual sensor, the contaminant recognition method is arranged to proceed as follows:
acquiring image data within the detection range by means of the visual sensor;
inputting the image data into a stain recognition model and outputting a stain recognition result from the model, where the stain recognition model is arranged to be obtained by training on correspondences between image data and stain recognition results.
In the embodiments of the present disclosure, training the stain recognition model on the correspondences between image data and stain recognition results may include: collecting in advance a set of image samples containing stains; to improve the robustness of the training results, the image samples may be captured under different lighting, angle, and focal-length conditions. The stain image samples are annotated by marking the stain contours. The stain recognition model may include a model based on a convolutional neural network: the convolutional neural network model is built with training parameters, the stain image samples are fed into the stain recognition model, and prediction results are generated. The prediction results may include: a prediction value a for pixel positions corresponding to stains in the image and a prediction value b for non-stain pixel positions, where the values of a and b may be preset; in one example, a may be set to 1 and b to 0. According to the difference between the prediction results and the annotated stain image samples, the training parameters are iteratively adjusted until the difference meets a preset requirement.
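A hedged sketch of such a pixel-wise training loop follows, using PyTorch as an assumed framework, a toy network, and dummy data in place of the annotated sample set:

    import torch
    import torch.nn as nn

    # Toy fully-convolutional net producing one stain logit per pixel.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1))
    criterion = nn.BCEWithLogitsLoss()           # prediction vs. 0/1 mask
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch standing in for annotated samples: mask 1 = stain (a), 0 = not (b).
    loader = [(torch.rand(4, 3, 64, 64),
               torch.randint(0, 2, (4, 1, 64, 64)).float())]

    for images, masks in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), masks)   # difference from annotation
        loss.backward()                          # iterative parameter update
        optimizer.step()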
In the embodiments of the present disclosure, separate models may be trained for different working surface materials, such as wooden floors, tiles, and marble. Training separately reduces the complexity of model training and improves the accuracy of the output. Before stain recognition, the current working surface material is determined first and the corresponding stain recognition model is then selected, reducing misjudgments and improving stain recognition accuracy.
In a possible implementation, inputting the image data into the stain recognition model and outputting the stain recognition result from the model includes:
inputting multiple image data of the detection range, captured from multiple shooting positions, into the stain recognition model, and outputting multiple recognition results for stains within the detection range;
taking as the stain recognition result the recognition result whose count of identical occurrences exceeds a preset value.
In the embodiments of the present disclosure, to improve the accuracy of the recognition result, multiple image data of the detection range captured from multiple shooting positions as the robot travels may be input into the stain recognition model, which outputs recognition results for stains within the detection range; the recognition result accounting for the largest share of identical results is taken as the stain recognition result. In one example, if 5 images are recognized and at least 4 of the results indicate a stain, the overall recognition result is judged to be that a stain is present.
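The multi-frame vote reduces to a few lines; the 4-of-5 threshold mirrors the example above:

    # Confirm a stain only when enough per-frame results agree.
    def vote_stain(frame_results, min_agree: int = 4) -> bool:
        """frame_results: one boolean per captured frame."""
        return sum(1 for r in frame_results if r) >= min_agree

    print(vote_stain([True, True, True, True, False]))  # True: 4 of 5 agree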
In the embodiments of the present disclosure, the image data is input into the above pre-trained stain recognition model, which outputs the stain recognition result; the result may include "no stain" or "stain present" with the stain's location region marked. Using a pre-trained stain model to recognize stains makes it possible to determine quickly whether a stain exists in front of the cleaning robot, which suits the preliminary screening of stains.
In the embodiments of the present disclosure, when the height of the stain is less than or equal to the preset height, the degree of contamination of the stain is acquired, and the degree of contamination may be obtained as follows:
acquiring the recognition result of the stain;
determining the degree of contamination according to the correlation between the stain's area ratio in the recognition result and the degree of contamination.
In the embodiments of the present disclosure, the area ratio of the stain is the ratio of the total area of the stains within the same detection range to the detection region. The recognition result output by the stain recognition model may include a prediction value a for stain pixel positions in the image and a prediction value b for non-stain pixel positions, where a and b may be preset; in one example, a may be set to 1 and b to 0. The share of prediction results equal to a among all prediction results can therefore reflect the size of the contaminated area. In one example, a correlation between the stain's image-pixel ratio and the degree of contamination is established, and the degree of contamination corresponding to the pixel ratio in the recognition result is determined.
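With a = 1 and b = 0 as in the example, the area ratio is simply the mean of the prediction mask; the mapping thresholds below are assumed placeholders, since the disclosure leaves the correlation to be preset:

    import numpy as np

    def stain_area_ratio(pred_mask: np.ndarray) -> float:
        # Share of pixels predicted as stain (a = 1) among all pixels.
        return float(pred_mask.mean())

    def degree_from_ratio(ratio: float) -> str:
        # Assumed example thresholds for the preset correlation.
        if ratio < 0.05:
            return "light"
        if ratio < 0.20:
            return "medium"
        return "heavy"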
In the embodiments of the present disclosure, the stain's area ratio can thus be determined directly from the recognition result data output by the stain recognition model, without more complex computation on that data.
In the embodiments of the present disclosure, the degree of contamination of a stain can be described by the stain's appearance features, such as its area, color depth, and height or thickness. In general, the larger the stain area, the darker its color, and the greater its height or thickness, the higher the degree of contamination.
In the embodiments of the present disclosure, the degree of contamination of the stain may be acquired in the following ways. In one example, a correspondence between contamination levels and stain area may be established, and the degree of contamination matching the stain area is determined from the actual stain area or the image-pixel ratio. In another example, a correspondence between contamination levels and the brightness, chroma, and/or saturation of the stain's color may be established, and the degree of contamination matching the color information is determined from the stain's color information. In a further example, a correspondence between contamination levels and stain height may be established, and the degree of contamination matching the height is determined from the stain's height. It should be noted that the way of determining the stain's degree of contamination is not limited to the above examples; for example, the grayscale features of the image may be used. Those skilled in the art, inspired by the technical essence of this application, may make other changes, but as long as the functions and effects achieved are the same as or similar to those of this application, they shall all be covered by the protection scope of this application.
In the embodiments of the present disclosure, the cleaning strategy may include a cleaning mode and a cleaning intensity. The cleaning mode may include mopping, sweeping, or vacuuming, and the cleaning intensity may include light, medium, and heavy cleaning. In one example, light cleaning is, for example, mopping 1-2 times; medium cleaning is, for example, mopping 4-6 times; and heavy cleaning is, for example, mopping 8-12 times. In another example, light cleaning is, for example, sweeping; medium cleaning is sweeping plus vacuuming; and heavy cleaning is sweeping plus vacuuming plus mopping. It should be noted that the cleaning strategy is not limited to the above examples; for example, it may also be set as a combination of multiple cleaning modes, cleaning counts, or cleaning durations. Those skilled in the art, inspired by the technical essence of this application, may make other changes, but as long as the functions and effects achieved are the same as or similar to those of this application, they shall all be covered by the protection scope of this application.
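One way such a preset correspondence might be represented, reusing the example pass counts above (the table layout itself is an assumption):

    # Assumed preset table: contamination degree -> cleaning strategy.
    CLEANING_STRATEGIES = {
        "light":  {"mode": "mop",              "passes": (1, 2)},
        "medium": {"mode": "mop",              "passes": (4, 6)},
        "heavy":  {"mode": "sweep+vacuum+mop", "passes": (8, 12)},
    }

    def strategy_for(degree: str) -> dict:
        return CLEANING_STRATEGIES[degree]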
In the embodiments of the present disclosure, the matching relationship between the degree of staining and the cleaning strategy may be preset. Controlling the cleaning robot to clean the stain according to the cleaning strategy matching the degree of staining includes controlling the robot to clean the stain itself according to that strategy, and may also include cleaning the area within a preset range around the stain; this application imposes no limitation.
Unlike the traditional approach of cleaning the working surface at a fixed cleaning frequency, the present disclosure controls the cleaning robot to clean the stain according to a cleaning strategy matching the stain: lightly contaminated stains receive light cleaning and heavily contaminated stains receive heavy cleaning, which cleans floor stains effectively while saving the robot's energy.
In a possible implementation, the method further includes:
when the degree of contamination is greater than a preset threshold, verifying the degree of contamination of the stain.
In the embodiments of the present disclosure, in one example, verifying the degree of contamination of the stain when the degree of contamination is greater than the preset threshold may include:
acquiring image data within the detection range by means of the visual sensor;
obtaining the information entropy of the image data;
determining the degree of contamination matching the information entropy according to the preset correlation between information entropy and degree of contamination.
In the embodiments of the present disclosure, the information entropy of an image is considered because it characterizes the image's content. For example, an all-white image has an information entropy of 0, and an all-black image likewise has an entropy of 0; an image alternating between black and white no longer has zero entropy, and the denser the alternation, the higher the entropy value. Therefore, when the cleaning robot travels over empty floor, the entropy of the captured images is relatively small; when it encounters a stain, the entropy is relatively large, and the larger the stain area and the more severe the contamination, the greater the entropy of the acquired image. The present disclosure can compute the information entropy of the image data acquired by the visual sensor within the detection range using existing methods for calculating image information entropy, which are not repeated here. When the entropy is greater than a preset value, the contamination of the stain is relatively severe; when it is less than or equal to the preset value, the contamination is relatively light.
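As one concrete (assumed) choice among the existing entropy methods referred to above, a histogram-based Shannon entropy can be computed as follows:

    import numpy as np

    def image_entropy(gray: np.ndarray) -> float:
        """Shannon entropy (bits) of an 8-bit grayscale histogram."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before the log
        h = -(p * np.log2(p)).sum()
        return float(abs(h))              # abs() folds IEEE -0.0 to 0.0

    # A uniform all-white patch has entropy 0, matching the text above.
    print(image_entropy(np.full((64, 64), 255, dtype=np.uint8)))  # 0.0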
In another example, the verification method may also include verifying the degree of contamination using the stain's appearance parameters, which may include the stain's thickness, area, texture, and color.
Verifying the degree of contamination of the stain when the degree of contamination is greater than the preset threshold may include:
acquiring image data within the detection range by means of the visual sensor;
extracting the appearance parameters of the contaminant from the image data, and comparing the appearance parameters with reference values to determine the contaminant's degree of contamination.
The appearance parameters may include the contaminant's grayscale, texture, and color. In one example, the appearance parameters may be obtained by any of the image feature extraction methods in the above embodiments and compared with reference values: if an appearance parameter value is greater than the reference value, the contamination of the stain is relatively severe; if it is less than the reference value, the contamination is relatively light.
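Purely as an illustration, such a comparison against reference values might look like the following; the reference values and the choice of parameters are assumed placeholders to be calibrated per floor surface:

    import numpy as np

    REFERENCE = {"gray_dev": 40.0, "texture": 18.0}   # assumed reference values

    def is_heavy_stain(gray_patch: np.ndarray) -> bool:
        # Simple (assumed) appearance parameters: deviation from a nominal
        # floor tone as a grayscale measure, standard deviation as texture.
        gray_dev = float(abs(gray_patch.mean() - 128.0))
        texture = float(gray_patch.std())
        return gray_dev > REFERENCE["gray_dev"] and texture > REFERENCE["texture"]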
In one example, verifying the degree of contamination when it is greater than the preset threshold may also include verification combined with map information. For example, the cleaning robot determines its position from the map information; if that position includes a textured floor, for instance near a marble threshold, the detection may be a false detection caused by the marble. The contaminant's degree of contamination can be verified in this way.
In the embodiments of the present disclosure, when the contamination of a stain appears heavy, the degree of contamination can be verified by any of the methods in the above embodiments, so as to prevent the cleaning robot from deep-cleaning a stain on the basis of a false detection and thereby impairing working efficiency.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

  1. A cleaning robot, characterized by comprising:
    a robot body;
    a motion module, arranged on the robot body and configured to support the robot body and drive the cleaning robot to move;
    a cleaning module, arranged on the robot body and configured to clean a working surface;
    a visual sensor, arranged on the robot body and used for capturing images;
    a control module, arranged on the robot body and configured to adjust the shooting direction of the visual sensor so that the cleaning robot has at least two different function modes.
  2. The cleaning robot according to claim 1, characterized in that:
    when the shooting direction of the visual sensor is obliquely upward, the cleaning robot has a first function mode;
    when the shooting direction of the visual sensor is horizontal or obliquely downward, the cleaning robot has a second function mode.
  3. The cleaning robot according to claim 2, characterized in that:
    the first function mode is a room recognition mode;
    the second function mode is an obstacle avoidance mode and/or a stain recognition mode.
  4. The cleaning robot according to claim 3, characterized in that:
    when the shooting direction of the visual sensor is horizontal, the cleaning robot has the obstacle avoidance mode;
    when the shooting direction of the visual sensor is obliquely downward, the cleaning robot has the stain recognition mode.
  5. The cleaning robot according to any one of claims 1-4, characterized in that the robot body is further provided with a fill-light module for enhancing the contrast between the target object and the background image in the image data captured by the visual sensor.
  6. A cleaning control method for a cleaning robot, characterized in that the cleaning robot is provided with a visual sensor with an adjustable shooting direction, the method comprising:
    adjusting the visual sensor to shoot obliquely upward, so that the cleaning robot has a first function mode;
    and/or, adjusting the visual sensor to shoot horizontally or obliquely downward, so that the cleaning robot has a second function mode.
  7. The method according to claim 6, characterized in that the step of adjusting the visual sensor to shoot obliquely upward so that the cleaning robot has the first function mode comprises:
    adjusting the visual sensor to shoot obliquely upward, so as to acquire image data of the environment surrounding the working area of the cleaning robot;
    performing room recognition on the working area according to the image data.
  8. The method according to claim 6, characterized in that the step of adjusting the visual sensor to shoot horizontally or obliquely downward so that the cleaning robot has the second function mode comprises:
    adjusting the visual sensor to shoot horizontally or obliquely downward, so as to detect objects within the detection range of the visual sensor;
    when it is determined that a stain exists within the detection range of the visual sensor, detecting height information of the stain;
    when the height of the stain is less than or equal to a preset height, acquiring the degree of contamination of the stain and controlling the cleaning robot to clean the stain according to a cleaning strategy matching that degree;
    when the height of the stain is greater than the preset height, controlling the cleaning robot to perform a preset obstacle avoidance action.
  9. The method according to claim 8, characterized in that the method further comprises:
    when the degree of contamination is greater than a preset threshold, verifying the degree of contamination of the stain.
  10. The method according to claim 9, characterized in that verifying the degree of contamination of the stain when the degree of contamination is greater than the preset threshold comprises:
    acquiring image data within the detection range by means of the visual sensor;
    extracting appearance parameters of the stain from the image data, and comparing the appearance parameters with reference values to determine the degree of contamination of the stain;
    and/or,
    acquiring image data within the detection range by means of the visual sensor;
    obtaining the information entropy of the image data;
    determining the degree of contamination matching the information entropy according to the preset correlation between information entropy and degree of contamination.
PCT/CN2021/141053 2020-12-25 2021-12-24 Cleaning robot and cleaning control method therefor WO2022135556A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180014614.3A CN115151174A (zh) 2020-12-25 2021-12-24 Cleaning robot and cleaning control method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011566034.9A CN114680732A (zh) 2020-12-25 2020-12-25 Cleaning robot and cleaning control method therefor
CN202011566034.9 2020-12-25

Publications (1)

Publication Number Publication Date
WO2022135556A1 (zh)

Family

ID=82130417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141053 WO2022135556A1 (zh) 2020-12-25 2021-12-24 Cleaning robot and cleaning control method therefor

Country Status (2)

Country Link
CN (2) CN114680732A (zh)
WO (1) WO2022135556A1 (zh)


Also Published As

Publication number Publication date
CN115151174A (zh) 2022-10-04
CN114680732A (zh) 2022-07-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21909551; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21909551; Country of ref document: EP; Kind code of ref document: A1)