CN117257170A - Cleaning method, cleaning display method, cleaning apparatus, and storage medium - Google Patents

Cleaning method, cleaning display method, cleaning apparatus, and storage medium

Info

Publication number
CN117257170A
Authority
CN
China
Prior art keywords
target
cleaning
camera
point cloud
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311283626.3A
Other languages
Chinese (zh)
Inventor
张天亮
竺浩
宋昱慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen 3irobotix Co Ltd
Original Assignee
Shenzhen 3irobotix Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen 3irobotix Co Ltd filed Critical Shenzhen 3irobotix Co Ltd
Priority to CN202311283626.3A
Publication of CN117257170A

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L11/4091: Storing or parking devices, arrangements therefor; Means allowing transport of the machine when it is not being used

Landscapes

  • Electric Vacuum Cleaner (AREA)

Abstract

The application discloses a cleaning method, a cleaning display method, a cleaning apparatus, a computer program product, and a non-volatile computer readable storage medium. The cleaning method comprises: adjusting the posture of a machine body, under the condition that voice calling information is received, so that a camera faces a target direction, the target direction being obtained by performing sound source localization on the voice calling information; identifying a target object in a scene image shot by the camera, and determining a target area of the target object in the scene image, the target object being the sound source object that sent out the voice calling information; acquiring, based on the target area and a preset calibration relation between the camera and a radar, a target point cloud set corresponding to the target area in the point cloud information collected by the radar; and determining a target position of the target area from the target point cloud set and moving to the target position for cleaning. Because the camera and the radar together determine the target position where the user is located, and that position is then cleaned, cleaning efficiency can be improved.

Description

Cleaning method, cleaning display method, cleaning apparatus, and storage medium
Technical Field
The present application relates to the field of intelligent cleaning technology, and more particularly, to a cleaning method, a cleaning presentation method, a cleaning device, a computer program product, and a non-volatile computer readable storage medium.
Background
As the pace of daily life accelerates, the sweeping robot plays an increasingly important role. In the conventional control mode the user must operate the robot with a remote controller (for example, an application program for the sweeping robot installed on a mobile phone, in which a cleaning area is planned manually), so cleaning a desired position involves cumbersome manual planning and the cleaning efficiency is low.
Disclosure of Invention
Embodiments of the present application provide a cleaning method, a cleaning presentation method, a cleaning apparatus, a computer program product, and a non-transitory computer readable storage medium.
The cleaning method of the embodiment of the application is applied to a cleaning device comprising a machine body and a camera and a radar arranged on the machine body. The cleaning method comprises: adjusting the posture of the machine body, under the condition that voice calling information is received, so that the camera faces a target direction, the target direction being obtained by performing sound source localization on the voice calling information; identifying a target object in a scene image shot by the camera, and determining a target area of the target object in the scene image, the target object being the sound source object that sent out the voice calling information; acquiring, based on the target area and a preset calibration relation between the camera and the radar, a target point cloud set corresponding to the target area in the point cloud information collected by the radar; and determining a target position of the target area from the target point cloud set and moving to the target position for cleaning.
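As an informal illustration of the four claimed steps, the following minimal Python sketch shows one possible control flow; every name in it (localize_sound_source, detect_target, and so on) is a hypothetical placeholder, since the patent itself prescribes no code.

```python
# Minimal sketch of the claimed four-step flow; all helper names are
# hypothetical placeholders, not part of the patent.

def handle_voice_call(robot, call_audio):
    # Step 1: sound source localization yields a target direction;
    # rotate the body so the camera faces that direction.
    direction = robot.localize_sound_source(call_audio)
    robot.rotate_to(direction)

    # Step 2: detect the sound source object (the user) in the camera image.
    image = robot.camera.capture()
    target_area = robot.detect_target(image)        # e.g. a bounding box

    # Step 3: use the preset camera/radar calibration relation to collect
    # the radar points that fall inside the target area.
    points = robot.points_in_area(robot.radar.scan(), target_area)

    # Step 4: reduce the point set to a single target position and clean it.
    target_position = robot.position_from_points(points)
    robot.move_to(target_position)
    robot.clean()
```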
In some embodiments, the cleaning device further comprises an audio collection assembly comprising a plurality of audio collection components for collecting the voice call information, and the method further comprises: performing sound source localization according to the time at which each audio collection component receives the voice call information, so as to determine the target direction in which the sound source object is located and the estimated distance between the sound source object and the cleaning device; judging whether the estimated distance is larger than a preset threshold value; if yes, entering the step of identifying the target object in the scene image shot by the camera; if not, moving a target distance away from the target object along the target direction so that the distance between the sound source object and the cleaning device is larger than the preset threshold value, and entering the step of identifying the target object in the scene image shot by the camera after the movement is completed; the target distance is determined according to the estimated distance and the preset threshold value.
Therefore, by receiving the voice call information with the audio collection components on the cleaning device body and performing sound source localization, both the direction of the user relative to the cleaning device and the estimated distance between the user and the cleaning device can be determined, and hence the posture adjustment needed so that the camera faces the user. The estimated distance between the user and the cleaning device is then checked, and if the device is too close to the user it moves away, so that the camera can capture as much of the user's whole body as possible, which facilitates the subsequent human body identification.
In some embodiments, before the step of acquiring, based on the target area and the preset calibration relation between the camera and the radar, the target point cloud set in the point cloud information collected by the radar, the cleaning method further includes adjusting the posture of the machine body so that the center of the target area is located at a preset position in the scene image.
In this way, since the camera generally faces directly ahead of the sweeping robot, adjusting the posture of the body so that the target area lies at the horizontal middle of the scene image (i.e., the middle position along the horizontal direction of the image) aligns the front of the robot with the user, and the robot can reach the user simply by moving straight ahead.
In some embodiments, the determining the target position of the target area from the target point cloud set comprises: determining a distance value between the cleaning device and each point cloud in the target point cloud set; and determining a target value among the distance values and the target point cloud corresponding to the target value, wherein the target position is the position corresponding to the target point cloud, and the target value is any one of a minimum value, a maximum value, an average value and a median value.
In this way, a distance value between each point cloud and the cleaning device is determined from the point cloud set, and the target point cloud corresponding to any one of the minimum, maximum, average and median of those distance values is determined; the position corresponding to the target point cloud is the position of the user relative to the cleaning device, so that position can be determined accurately. When the target area is a rectangular frame framing the human body, using the median of the distance values to pick the target point cloud ensures that the target point cloud lies at least on the human body, preserving the accuracy of the position determination. When the target area is the region enclosed by the contour of the human body, any one of the minimum, maximum and median of the distance values likewise yields a target point cloud on the human body, again preserving accuracy.
In some embodiments, the voice call information includes a normal cleaning mode, and the method further includes: when the cleaning mode corresponding to the voice call information is the normal cleaning mode, planning an area to be cleaned according to the target position; and planning a cleaning path in the area to be cleaned and moving along the cleaning path to clean the area to be cleaned.
In this way, with the cleaning device set to the normal cleaning mode, the user position can be determined from where the voice was uttered, the area to be cleaned can be determined from that position, and a cleaning path can then be planned.
In some embodiments, the voice call information includes a follow-up cleaning mode, and the method further includes: when the cleaning mode corresponding to the voice call information is the follow-up cleaning mode, following the movement of the target object according to the target positions corresponding to the target areas in consecutive frames of scene images, so as to clean the area corresponding to the moving track of the target object.
Therefore, when the cleaning device is set to the follow-up cleaning mode, it can follow the user through the scene while cleaning. In the normal cleaning mode, a scene with several dirty spots requires the user to issue a voice command for each spot's area, which is cumbersome and gives a poor user experience; in the follow-up cleaning mode the user only needs to speak once, and the cleaning device follows the user past the area of each dirty spot, so the operation is simple.
The cleaning display method of the embodiment of the application comprises: displaying, in a map, the current position and current posture of the cleaning device and the target object that sent out the voice call information, the cleaning device facing the target object, and the current position and current posture being determined according to the current pose information of the cleaning device; and displaying a real-time moving path of the cleaning device from the current position to a target position corresponding to the target object.
Thus, by displaying on the screen the pose of the cleaning device on the map, the position of the user, the orientation of the cleaning device relative to the user, and the moving path, the respective positions of the cleaning device and the user, the pose of the cleaning device, and their relative positions can all be clearly read from the display screen.
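As a non-authoritative sketch, the display step could be rendered on a 2D map canvas as follows; the canvas API and the 0.17 m footprint radius are assumptions, not taken from the patent.

```python
# Hypothetical rendering of the cleaning display method: draw the robot
# pose, the calling user, and the live moving path on a 2D map canvas.
import math

def draw_state(canvas, robot_pose, user_pos, path):
    x, y, heading = robot_pose               # heading in radians
    canvas.draw_circle((x, y), radius=0.17)  # robot footprint (assumed size)
    # Short arrow showing that the robot faces the user.
    canvas.draw_line((x, y), (x + 0.3 * math.cos(heading),
                              y + 0.3 * math.sin(heading)))
    canvas.draw_marker(user_pos, label="target object")
    canvas.draw_polyline(path)               # real-time moving path
```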
The cleaning device of the embodiment of the present application includes a processor and a memory, wherein a computer program is stored in the memory and executable by the processor, the computer program including instructions for executing the cleaning method of any of the above embodiments.
The computer program product of an embodiment of the present application comprises a computer program comprising instructions for performing the cleaning presentation method of any of the embodiments described above.
The non-transitory computer readable storage medium of the present embodiments includes a computer program that, when executed by a processor, causes the processor to perform the cleaning method of any of the above embodiments and to perform the cleaning presentation method of any of the above embodiments.
According to the cleaning method, the cleaning display method, the cleaning device, the computer program product and the non-volatile computer readable storage medium of the embodiments of the application, when the cleaning device receives voice call information in which a user asks for cleaning, sound source localization is performed to determine the direction of the user (i.e., the sound source); the cleaning device automatically adjusts the posture of its body so that the camera mounted on the body faces the direction from which the voice call information came, and photographs the scene where the user is located to obtain a scene image. The scene image is then recognized, the target area of the user in the scene image is determined, and, through the preset calibration relation between the camera and the radar mounted on the cleaning device body, the target point cloud set corresponding to that target area is acquired from the point cloud information collected by the radar. The target position of the user is then determined from the point cloud set; the device moves from its current position to the target position and finally cleans it. Compared with locating the user purely by the calling sound, which suffers large positioning errors due to interference from environmental noise, this approach can accurately control the cleaning device to go to the user's position for cleaning.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic illustration of an application scenario of a cleaning method according to certain embodiments of the present application;
FIG. 2 is a schematic plan view of a sweeping robot according to certain embodiments of the present application;
FIG. 3 is a schematic illustration of a scenario of a cleaning method of certain embodiments of the present application;
FIG. 4 is a schematic illustration of a scenario of a cleaning method of certain embodiments of the present application;
FIG. 5 is a schematic flow chart of a cleaning method according to certain embodiments of the present application;
FIG. 6 is a schematic flow chart of a cleaning method according to certain embodiments of the present application;
FIG. 7 is a flow diagram of a cleaning method according to certain embodiments of the present application;
FIG. 8 is a flow diagram of a cleaning method according to certain embodiments of the present application;
FIG. 9 is a flow diagram of a cleaning method according to certain embodiments of the present application;
FIG. 10 is a schematic illustration of a scenario of a cleaning method of certain embodiments of the present application;
FIG. 11 is a flow diagram of a cleaning method according to certain embodiments of the present application;
FIG. 12 is a schematic illustration of a scenario of a cleaning method of certain embodiments of the present application;
FIG. 13 is a flow diagram of a cleaning method according to certain embodiments of the present application;
FIG. 14 is a schematic block diagram of a cleaning device according to certain embodiments of the present application;
FIG. 15 is a block schematic diagram of a cleaning display device according to certain embodiments of the present application;
fig. 16 is a schematic diagram of a connection state of a non-volatile computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
To facilitate an understanding of the present application, the following description of terms appearing in the present application will be provided:
the floor sweeping robot, also called an automatic sweeper, intelligent vacuum cleaner, robot vacuum cleaner, etc., is a kind of intelligent household appliance that can automatically clean the floor inside a room by means of a certain degree of artificial intelligence. It generally adopts brushing and vacuum modes, first drawing floor debris into its own garbage storage box, thereby completing the floor-cleaning function. In general, robots that perform sweeping, vacuuming and floor scrubbing are collectively referred to as floor cleaning robots.
The self-moving robot can be a vacuuming robot, a sweeping/mopping/scrubbing robot, and the like. For brevity, the present application takes the sweeping robot as its example of a self-moving robot; the principle is similar for other types of robots and is not repeated here.
The conventional control manner of the sweeping robot 100 requires the user to operate a remote controller (for example, an application program for the sweeping robot 100 installed on a mobile phone, in which a cleaning area is planned manually), so cleaning a desired position involves cumbersome manual planning and the cleaning efficiency is low.
In order to solve the above technical problems, an embodiment of the present application provides a cleaning method.
An application scenario of the technical solution of the present application is described first. As shown in fig. 1, the cleaning method provided in the present application may be applied to a cleaning system 1000 including the sweeping robot 100, the base station 200, the terminal 400, and the server 500.
The cleaning apparatus of the present application may include only the robot cleaner 100, or the cleaning apparatus includes the robot cleaner 100 and the base station 200 (or referred to as a dust collecting station), and the robot cleaner 100 and the base station 200 may be connected through a network to determine a current state (e.g., an electric quantity state, an operating state, position information, etc.) of the opposite terminal.
Wherein the sweeping robot 100 includes a processor 20, a memory 30, a body 40, a camera 50, and a radar 60; the processor 20 communicates with the camera 50 and the radar 60 via a network, respectively, and the camera 50, the radar 60 and the processor 20 are provided on the body 40 of the self-moving robot.
The camera 50 is used for acquiring scene images; the camera 50 may be a visible light camera (Red-Green-Blue, RGB), a visible light Depth camera (Red-Green-Blue-Depth, RGBD), an infrared camera, a thermal imaging camera, a Depth camera, etc., the RGB camera and the RGBD camera may capture a visible light image of a scene, the infrared camera may capture an infrared image of the scene, the thermal imaging camera may capture a thermal imaging image of the scene, and the Depth camera may capture a Depth image.
Optionally, there are one or more cameras 50. The cameras 50 may be disposed on a side wall of the body 40; for example, a camera 50 is disposed facing directly ahead of the sweeping robot 100 to collect images of the scene in front of it, or cameras 50 are disposed on both sides of the sweeping robot 100 to collect images of the scenes on both sides while the robot moves forward.
The radar 60 is used to acquire point cloud information of objects in a scene. The radar 60 may be a lidar, i.e., a laser distance sensor (LDS), such as a Time of Flight (TOF) radar based on the time-of-flight principle or a triangulation structured-light radar based on the structured light principle.
The radar 60 is provided at a top wall of the sweeping robot 100; it may protrude from the top wall, or be housed within the body 40 without protruding, that is, the height of the radar 60 may be lower than that of the top wall. When the sweeping robot 100 receives voice call information, the processor 20 adjusts the posture so that the camera 50 on the body can shoot the target object and generate a scene image; the target area in the scene image shot by the camera 50 is identified; a target point cloud set corresponding to the target object is determined from the point cloud information collected by the radar 60 according to the preset calibration relation between the camera 50 and the radar 60; finally, a target position to be cleaned is determined from the target point cloud set, and the sweeping robot 100 is controlled to move to the target position for cleaning.
In one embodiment, the sweeping robot 100 further includes an audio acquisition assembly 70. The audio collection assembly 70 is disposed on the body 40 and communicates with the outside such that the audio collection assembly 70 can collect sound signals of the outside.
Optionally, the audio collection assembly 70 comprises audio collection components 71; an audio collection component 71 may be a microphone, a sound sensor, or the like. A plurality of microphones may form a microphone array, and the shape of the microphone array may be a cross array, a circular array, a rectangular array, a spiral array, or the like.
In one embodiment, the robot cleaner 100 further comprises a memory 30, the memory 30 being for storing a computer program 31 containing instructions for performing the cleaning method.
The base station 200 may include a display 201, and the base station 200 may be capable of communicating with the sweeping robot to obtain data transmitted by the sweeping robot 100, and may process the data by using a processing capability of the base station 200, so as to implement functions of controlling the sweeping robot 100 (e.g., controlling the sweeping robot 100 to move to a target position for cleaning), displaying relevant contents of the sweeping robot 100, and the like.
In one embodiment, the cleaning system 1000 further includes a terminal 400, the terminal 400 including a display 401. The terminal 400 can communicate with the sweeping robot 100 to obtain data transmitted by the sweeping robot 100, and can process the data by using the processing capability of the terminal 400, so as to realize functions of controlling the sweeping robot 100 (for example, controlling the sweeping robot 100 to move to a target position for cleaning), displaying related contents of the sweeping robot 100, and the like.
In one embodiment, the terminal 400 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, and the like.
For example, at least one of the display 201 of the base station 200 and the display 401 of the terminal 400 may process the position information transmitted by the robot cleaner 100 to determine the position of the robot cleaner 100 in the map, and then display the current position of the robot cleaner 100 in the map, the moving track of the robot cleaner 100, and the like in real time. For another example, the state information transmitted from the robot cleaner 100 is processed to determine the current state of the robot cleaner 100, and then the current state of the robot cleaner 100 is displayed in real time.
In one embodiment, the cleaning system 1000 further includes a server 500, and the server 500 and the cleaning device communicate over a network. The server is used for receiving the scene images and the point cloud information collected by the camera 50 and the radar 60, identifying the target object in the scene image shot by the camera 50, and determining the target area of the target object in the scene image; determining a target point cloud set based on the preset calibration relation between the camera 50 and the radar 60; and determining the target position corresponding to the target area according to the target point cloud set and instructing the sweeping robot 100 to move to the target position for cleaning.
In one embodiment, the server 500 may be a separate physical server 500, or may be a server 500 cluster or a distributed system formed by a plurality of physical servers 500, or may be a cloud server 500 that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The embodiments of the present application are not limited in this regard.
In one embodiment, the sweeping robot 100, the base station 200, the terminal 400 and the server 500 all communicate through a network; for example, any two of them may communicate wirelessly (e.g., wireless local area network (Wireless Fidelity, wifi) communication, Bluetooth communication, infrared communication, etc.). It is understood that the communication among the sweeping robot 100, the base station 200, the terminal 400, and the server 500 is not limited to the above-described manners.
When wifi communication is performed, the sweeping robot 100 and the base station 200 communicate with the cloud server 500 respectively, and then the cloud server 500 realizes communication between the sweeping robot 100 and the base station 200 (or the terminal 400); when the communication is performed through bluetooth or infrared, the robot cleaner 100 and the base station 200 (or the terminal 400) are each provided with a corresponding communication module to directly implement communication therebetween.
In one embodiment, the cleaning method may be implemented by at least one of the robot cleaner 100, the base station 200, the terminal 400, and the server 500. Such as through the cooperation of the sweeping robot 100 with the base station 200, the terminal 400 and the server 500, or through the cooperation of the sweeping robot 100 with the base station 200, or through the cooperation of the sweeping robot 100 with the terminal 400, etc.
Referring to fig. 2 to 5, the cleaning method of the embodiment of the present application is applied to a cleaning device, the cleaning device includes a main body 40, a camera 50 and a radar 60 disposed on the main body 40, and the cleaning method includes:
step 011: under the condition that voice calling information is received, the posture of the body 40 is adjusted so that the camera 50 faces to a target direction, and the target direction is obtained by carrying out sound source positioning on the voice calling information;
taking the cleaning device as the sweeping robot 100 as an example, the sweeping robot 100 includes a body 40, and a camera 50 and a radar 60 disposed on the body 40, where the shape of the body 40 may be a circle or a rectangle, the camera 50 may be used to photograph the environment where the sweeping robot 100 is located and generate a scene image, and the radar 60 may be used to scan, detect the environment where the sweeping robot 100 is located and generate point cloud information (e.g., the point cloud information may be a set of point data of an appearance surface of an object detected by the radar 60). The robot cleaner 100 further comprises a processor 20 and a memory 30, the memory 30 being capable of storing a computer program 31 containing instructions for performing the cleaning method, the processor 20 being capable of executing the computer program 31 containing instructions for performing the cleaning method.
Specifically, the self-moving robot can be awakened into its working mode by voice wake-up information from the user. On receiving voice call information (for example, "cleaning is needed here" or "start cleaning"), the sweeping robot 100 turns the camera 50 toward the target direction by adjusting the posture of the body 40, for example rotating the body 40 by 30 degrees to the left or 30 degrees to the right. The target direction is obtained by performing sound source localization on the user's voice, so that after the camera 50 turns to the target direction its field of view contains the user.
For example, after the sweeping robot 100 receives a "start sweeping" instruction from a user, the sweeping robot 100 performs sound source localization on the voice from the user, determines that the user is in the direction of 60 ° on the right side of the camera 50 of the sweeping robot 100, and adjusts the body 40 to rotate the body 40 by 60 ° to the right so that the camera 50 is directed to the direction of the user.
Step 012: identifying a target object M1 in a scene image shot by the camera 50, and determining a target area S1 of the target object M1 in the scene image, wherein the target object M1 is the sound source object M1 that sent out the voice call information;
Specifically, after the camera 50 faces the direction in which the user is located, the processor 20 controls the camera 50 to shoot, generating a scene image. The camera 50 may be an RGB camera, an infrared camera, a thermal imaging camera, a depth camera, etc. In daytime, when light is sufficient, the scene image shot by the RGB camera is clearer and the target object M1 is located more accurately; at night, when light is scarce, the infrared camera and the thermal imaging camera can still capture the target object M1 and obtain a clearer scene image. After obtaining the scene image, the processor 20 can determine the target area S1 by identifying the target object M1 in the scene image. It should be noted that the target object M1 is the sound source object M1 that sent out the voice call information, i.e., the target object M1 may be the user; the target area S1 is the position of the user within the scene image, e.g., to the left or right of, or above or below, the center of the scene image.
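As an illustration of step 012, the sketch below assumes an off-the-shelf person detector is available; the detector interface and the centre-preference policy for multi-person scenes are assumptions, not prescribed by the patent.

```python
# Hypothetical sketch of step 012: find the target area S1 of the sound
# source object in a scene image using an assumed person detector.

def find_target_area(image, detector):
    detections = detector.detect(image)          # list of detection objects
    people = [d for d in detections if d.label == "person"]
    if not people:
        return None                              # user not in the field of view
    # If several people are visible, one simple policy is to keep the
    # detection closest to the image centre, since the camera was just
    # turned toward the sound source.
    cx = image.width / 2
    return min(people, key=lambda d: abs(d.bbox.center_x - cx)).bbox
```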
Step 013: acquiring a target point cloud set corresponding to the target area S1 in the point cloud information acquired by the radar 60 based on the target area S1 and a preset calibration relation between the camera 50 and the radar 60;
Specifically, the processor 20 can determine the target point cloud set corresponding to the target area S1 in the scene image from the point cloud information collected by the radar 60 (for example, a laser radar) on the body 40, through the calibration relation preset between the camera 50 and the radar 60.
For example, through the preset calibration relation between the camera 50 and the radar 60, the intrinsic and extrinsic parameters between the camera 50 and the radar 60 can be calibrated. By scanning the environment of the sweeping robot 100 through 360 degrees, the radar 60 covers a region of the plane, and the scene image generated by the camera 50, once converted through the intrinsic and extrinsic parameters, maps into the point cloud generated by the radar 60. The target area S1 may be a box area enclosing the target object M1 in the scene image, i.e., a box surrounding the user's human body. By recognizing the scene image, the processor 20 can determine the extent of the box using an edge detection algorithm in image processing, obtain the pixels contained in the box, and, based on the extrinsic and intrinsic parameters, obtain the point cloud set corresponding to those pixels, i.e., the target point cloud set.
Alternatively, the target area S1 may be the area enclosed by the contour of the user's human body in the scene image. The processor 20 determines the contour of the user's body by identifying the scene image, obtains the pixels contained in the area enclosed by that contour, and, based on the extrinsic and intrinsic parameters, obtains the point cloud set corresponding to those pixels, i.e., the target point cloud set.
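The pixel-to-point-cloud association of step 013 can be illustrated with the standard pinhole model; the NumPy sketch below, with the usual 3x3 intrinsic matrix K and a 4x4 lidar-to-camera extrinsic, keeps the radar points whose projections fall inside a bounding-box target area. This parameterization is an assumption: the patent only requires some preset calibration relation between the camera 50 and the radar 60.

```python
# Sketch of step 013 under a pinhole-camera assumption.
import numpy as np

def points_in_target_area(points_lidar, K, T_cam_lidar, bbox):
    """points_lidar: (N, 3) radar points; K: 3x3 camera intrinsics;
    T_cam_lidar: 4x4 lidar-to-camera extrinsic; bbox: (u_min, v_min,
    u_max, v_max) target area S1 in pixel coordinates."""
    n = len(points_lidar)
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # lidar -> camera frame
    in_front = pts_cam[:, 2] > 1e-6                      # keep points ahead of camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective division
    u_min, v_min, u_max, v_max = bbox
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return points_lidar[in_front][inside]                # the target point cloud set
```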
Step 014: and determining the target position of the target area according to the target point cloud set, and moving to the target position for cleaning.
Specifically, after acquiring the target point cloud set corresponding to the target area S1, the processor 20 can convert the target point cloud set into the pixel set of the scene image according to the internal parameters and the external parameters between the camera 50 and the radar 60, thereby obtaining a target pixel set, or can convert the pixel set of the target area S1 into the point cloud set according to the internal parameters and the external parameters between the camera 50 and the radar 60, thereby obtaining a target point cloud set, and can further obtain the current position and the target position according to the converted pixel set or the point cloud set.
It should be noted that the resolutions of the radar 60 and the camera 50 may differ; in general the resolution of the camera 50 is greater than that of the radar 60. Therefore, when the pixel set of the target area S1 is converted into a point cloud set, some pixels may have no corresponding point cloud, so the resulting target point cloud set is not accurate enough; converting the target point cloud set into pixels has no such missing-correspondence problem, so an accurate target pixel set can be obtained.
The processor 20 can plan a moving path from the current position to the target position according to the acquired current position and target position, and control the robot cleaner 100 to move to the target position according to the planned moving path, wherein the target position is the position where the user is located.
Alternatively, when there is an obstruction between the target scene and the current scene where the sweeping robot 100 is located, the sound source object M1 can send out voice movement information to the sweeping robot 100, and the robot moves toward the target scene according to the voice movement information upon receiving it.
Specifically, in the case where there is a barrier between the target scene in which the sound source object M1 is located and the current scene in which the sweeping robot 100 is located, the camera 50 cannot capture a scene image including the sound source object M1, at this time, the user can send voice movement information to the sweeping robot 100, and when the sweeping robot 100 receives the voice movement information, the processor 20 can control the sweeping robot 100 to move to the target scene in which the sound source object M1 that sends the voice movement information is located.
For example, a cabinet blocks the line of sight between the user and the sweeping robot 100, so the robot cannot shoot an image of the scene where the user is; the user may then send out voice movement information, such as "move 3 meters leftwards", "move 2 meters forwards", or "move 1 meter rightwards", and the processor 20 controls the sweeping robot 100 to move, according to the voice movement information, into the target scene from which a scene image containing the user can be captured.
In this way, when there is an obstruction between the sweeping robot 100 and the user, the robot is directed by voice movement information into an area from which the user can be photographed, ensuring that the robot and the user are in the same scene, so that the camera 50 can photograph the user based on the sound source localization for the subsequent determination of the user's position.
In this way, when the sweeping robot 100 receives voice call information from the user asking for cleaning, sound source localization is performed and the direction of the user (i.e., the sound source) is determined; the sweeping robot 100 automatically adjusts the posture of the body 40 so that the camera 50 mounted on the body 40 faces the direction from which the voice call came, and photographs the scene where the user is located to obtain a scene image. Then, the target area S1 of the user in the scene image is determined by identifying the scene image, and the target point cloud set corresponding to that area in the point cloud information collected by the radar 60 is acquired through the calibration relation preset between the radar 60 and the camera 50 mounted on the body 40 of the sweeping robot 100. The target position of the user is then determined from the point cloud set; the robot moves from its current position to the target position and finally cleans it. Compared with locating the user purely by the calling sound, which suffers large positioning errors due to interference from environmental noise, this approach can accurately control the sweeping robot 100 to go to the user's position for cleaning.
Referring to fig. 2, 4 and 6, in some embodiments, the cleaning apparatus further comprises an audio collection assembly, the audio collection assembly comprising a plurality of audio collection components, the audio collection components configured to collect voice call information, the cleaning method further comprising:
step 015: sound source localization is performed according to the time when the voice call information is collected by each of the audio collection parts 70 to determine the target direction in which the sound source object M1 is located and the estimated distance between the sound source object M1 and the cleaning device.
Step 016: judging whether the estimated distance is larger than a preset threshold value or not;
step 017: if yes, a step of identifying a target object M1 in the scene image shot by the camera 50 is entered;
step 018: if not, moving a target distance away from the target object M1 along the target direction so that the distance between the sound source object M1 and the cleaning device is greater than a preset threshold, and entering the target object M1 in the scene image shot by the recognition camera 50 after the movement is completed.
The target distance is determined according to the estimated distance and a preset threshold value.
Specifically, the sweeping robot 100 is provided with an audio collection assembly 70 arranged as a microphone array. The array comprises microphones 71 used to collect voice call information sent by a user (for example, "start cleaning", "cleaning is needed here", etc.); the number of microphones 71 may be plural (e.g., 16, 32, etc.), which is not limited here, and the array may be a cross array, a circular array, a rectangular array, a spiral array, or the like.
The processor 20 performs sound source localization on the target object M1 by acquiring the time at which each microphone 71 receives the voice call information. Because the call arrives at different microphones 71 with different delays (time delays) and sound propagates in air at about 340 meters per second, the distance between the sound source and each microphone 71 can be calculated; from these distances, the estimated distance between the sound source object M1 and the sweeping robot 100 is determined, the position of the sound source relative to the robot is derived, and the target direction of the sound source object M1 relative to the sweeping robot 100 follows from that position.
Alternatively, when the camera 50 is disposed facing directly ahead on the body 40 of the sweeping robot 100, the target direction is directly ahead of the robot; when the camera 50 is mounted 30 degrees to the right (or 30 degrees to the left) of directly ahead, the target direction is correspondingly 30 degrees to the right (or left) of directly ahead of the sweeping robot 100.
In this way, by receiving the voice call information through the microphones 71 on the body 40 of the sweeping robot 100 and performing sound source localization, the direction of the user relative to the robot and the estimated distance between the user and the sweeping robot 100 can be determined, and hence the posture the robot must adopt so that the camera 50 faces the user.
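The time-delay idea can be made concrete with the simplest possible instance, a far-field bearing estimate from a single microphone pair; a real device with a 16- or 32-element array would solve a joint TDOA problem instead, so the sketch below is an illustration under that simplifying assumption.

```python
# Two-microphone far-field bearing estimate from the arrival-time delay.
import math

SPEED_OF_SOUND = 340.0  # m/s, the value used in the description above

def bearing_from_pair(t_left, t_right, mic_spacing):
    """t_left, t_right: arrival times (s) of the call at two microphones
    spaced mic_spacing metres apart; returns the source angle relative to
    the array broadside, in degrees."""
    delay = t_right - t_left
    # Path-length difference over spacing, clamped so the arcsine stays
    # defined when measurement noise pushes it slightly out of [-1, 1].
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay / mic_spacing))
    return math.degrees(math.asin(ratio))
```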
Next, after obtaining the estimated distance between the sound source and the sweeping robot 100 by sound source localization, the processor 20 judges whether the estimated distance is greater than a preset threshold (e.g., 3 meters, 4 meters, etc.), the threshold being chosen so that the scene image shot by the camera 50 can contain the whole body of the sound source object M1.
In the case that the processor 20 determines that the estimated distance between the sound source object M1 and the sweeping robot 100 is greater than or equal to the preset threshold, the processor 20 controls the camera 50 to take a picture to generate a scene image, and then controls the sweeping robot 100 to enter a step of identifying the target object M1 in the scene image taken by the camera 50.
For example, if the estimated distance between the sound source object M1 and the robot 100 is 10 meters, the preset threshold is 3 meters, the target direction is the direction directly in front of the robot 100, and the estimated distance between the sound source object M1 and the robot 100 is greater than the preset threshold, the processor 20 controls the camera 50 to perform shooting to generate a scene image, and then controls the robot 100 to identify the target object M1 in the scene image shot by the camera 50.
In the case that the processor 20 judges that the estimated distance between the sound source object M1 and the sweeping robot 100 is less than the preset threshold, the processor 20 controls the sweeping robot 100 to move the target distance in the target direction such that the sweeping robot 100 is far away from the target object M1 and such that the distance between the sound source object M1 and the sweeping robot 100 is greater than the preset threshold, and then the processor 20 controls the sweeping robot 100 to enter the step of recognizing the target object M1 in the scene image photographed by the camera 50.
For example, if the estimated distance between the sound source object M1 and the sweeping robot 100 is 2 meters, the preset threshold is 3 meters, and the target direction is directly ahead of the robot, the processor 20 controls the sweeping robot 100 to move backward by at least 1 meter so that the distance between the sound source object M1 and the robot exceeds the 3-meter threshold, then controls the camera 50 to shoot and generate a scene image, and finally enters the step of identifying the target object M1 in the scene image shot by the camera 50.
In this way, by judging the approximate distance between the user and the sweeping robot 100 and moving the sweeping robot 100 away from the user in the case that the sweeping robot 100 is too close to the user, the camera 50 of the sweeping robot 100 can shoot the whole body of the user as much as possible, so that the subsequent human body recognition is facilitated.
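The retreat distance of step 018 follows directly from the estimated distance and the threshold; in the 2 m / 3 m example above it comes to at least 1 meter. A minimal sketch, in which the 0.2 m safety margin is an assumed value:

```python
# How far to back away along the target direction before shooting.

def backoff_distance(estimated_distance, threshold, margin=0.2):
    if estimated_distance >= threshold:
        return 0.0                      # already far enough; shoot in place
    # Retreat just past the threshold, plus a small assumed margin.
    return threshold - estimated_distance + margin
```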
Referring to fig. 3 and fig. 7, in some embodiments, before the step of acquiring, based on the target area S1 and the preset calibration relation between the camera 50 and the radar 60, the target point cloud set in the point cloud information collected by the radar 60, the cleaning method further includes:
step 019: the posture of the body 40 is adjusted so that the center of the target area S1 is located at a preset position in the scene image.
Specifically, before the step of acquiring, based on the target area S1 and the preset calibration relation between the camera 50 and the radar 60, the target point cloud set in the point cloud information collected by the radar 60, the processor 20 needs to adjust the posture of the body 40 of the sweeping robot 100 so that the center of the target area S1 is located at a preset position in the scene image; the preset position may be the horizontal middle position of the scene image, that is, the middle position along the horizontal direction of the image.
For example, when the orientation of the camera 50 is set directly in front of the sweeping robot 100, the camera 50 photographs the generated scene image, and the target area S1 is located exactly in the horizontal middle position of the scene image; when the camera 50 is arranged in a direction of being deviated by 30 degrees to the right from the right in front of the sweeping robot 100, the body 40 of the sweeping robot 100 needs to be adjusted to rotate by 30 degrees to the left so that the direction of the camera 50 is positioned in the direction of being directly in front of the sweeping robot 100, and thus the photographed target area S1 is positioned in the horizontal middle position of the scene image; when the camera 50 is disposed in a direction of 60 degrees to the left right in front of the robot cleaner 100, it is necessary to adjust the body 40 of the robot cleaner 100 to rotate by 60 degrees to the right so that the orientation of the camera 50 is positioned in the direction of the right front of the robot cleaner 100, thereby positioning the photographed target area S1 in the horizontal middle of the scene image.
In this way, since the camera 50 generally faces directly ahead of the sweeping robot 100, placing the target area S1 at the horizontal middle of the scene image by adjusting the posture of the body 40 aligns the front of the robot with the user, and the robot can reach the user simply by moving straight ahead.
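Under a pinhole-camera assumption, the rotation required by step 019 can be computed from the horizontal pixel offset of the target area's centre; fx below is the camera's horizontal focal length in pixels, assumed known from calibration.

```python
# Yaw correction that moves the bounding-box centre onto the image's
# horizontal midline (pinhole-camera sketch, not from the patent).
import math

def yaw_correction(bbox_center_u, image_width, fx):
    offset = bbox_center_u - image_width / 2.0    # pixels right of centre
    return math.degrees(math.atan2(offset, fx))   # positive = rotate right
```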
Referring to fig. 8, in certain embodiments, step 014: determining the target position of the target area from the target point cloud set comprises:
step 0141: determining a distance value between each point cloud in the target point cloud set and the cleaning device;
step 0142: determining a target value among the distance values and the target point cloud corresponding to the target value, wherein the target position is the position corresponding to the target point cloud, and the target value is any one of a minimum value, a maximum value, an average value and a median value.
Specifically, after the target point cloud set is acquired, the processor 20 calculates, for each point cloud in the set, the distance between that point cloud and the sweeping robot 100, thereby obtaining a distance value for every point cloud in the target point cloud set.
The processor 20 may sort the obtained distance values by size to determine the minimum, maximum and median, and may sum the distance values and divide by the number of point clouds to obtain their average. The target value may be any one of the minimum, maximum, median and average; the target point cloud corresponding to the chosen target value is then determined, and the target position follows from that target point cloud.
Optionally, when the target area S1 in the scene image follows the contour of the user's human body, any one of the minimum, maximum or average of the distance values is used to determine the target point cloud, so that the target point cloud is at least a point on the human body and the accuracy of the position determination between the user and the self-moving robot is guaranteed; when the target area S1 is a bounding box enclosing the contour of the user's body, the target point cloud corresponding to the median of the sorted distance values is used, so that the target point cloud again lies at least on the human body and the accuracy of the position determination is preserved.
In this way, a distance value between each point cloud and the sweeping robot 100 is determined from the point cloud set, and the target point cloud corresponding to any one of the minimum, maximum, average and median of those values is determined; the position corresponding to the target point cloud reflects where the user is relative to the sweeping robot 100, so that position can be determined accurately.
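Steps 0141 and 0142 amount to reducing the target point cloud set to one point via a distance statistic; a sketch, assuming the point coordinates are expressed in the robot's own frame so that distances are taken from the origin:

```python
# Pick the target position from the target point cloud set by one of the
# four statistics named in the text (min, max, average, median).
import numpy as np

def target_from_points(points, policy="median"):
    """points: (N, 3) target point cloud set in the robot frame (assumed)."""
    dists = np.linalg.norm(points, axis=1)        # distance of each point cloud
    if policy == "mean":
        # No point has exactly the mean distance, so take the closest one.
        idx = int(np.argmin(np.abs(dists - dists.mean())))
    elif policy == "median":
        idx = int(np.argsort(dists)[len(dists) // 2])
    elif policy == "max":
        idx = int(np.argmax(dists))
    else:                                         # "min"
        idx = int(np.argmin(dists))
    return points[idx]                            # the target position
```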
Referring to fig. 9 and 10, in some embodiments, the voice call information includes a normal cleaning mode, and the cleaning method further includes:
Step 020: when the cleaning mode corresponding to the voice call information is the normal cleaning mode, planning an area S2 to be cleaned according to the target position;
step 021: planning a cleaning path in the area S2 to be cleaned and moving along the cleaning path to clean the area S2.
Specifically, the cleaning mode of the sweeping robot 100 may be a normal cleaning mode. When the target object M1 sends out voice call information and the sweeping robot 100 is in the normal cleaning mode, the processor 20 can plan the area S2 to be cleaned according to the target position where the target object M1 is located; the area S2 may be a circular area centered on the target position, or a rectangular area centered on the target position.
For example, the area S2 to be cleaned may be a circular area with a radius of 3 meters centered on the user, or a rectangular area 4 meters long and 3 meters wide with the user as its center of symmetry.
After the processor 20 determines the area S2 to be cleaned by the sweeping robot 100, it can plan a cleaning path for the robot within the area S2 and control the robot to move along that path to clean the area S2.
For example, when the shape of the area S2 to be cleaned is a circular area, the processor 20 can plan the cleaning path along which the robot 100 moves to clean in a swirl shape from the periphery of the circular area; when the shape of the area to be cleaned S2 is a rectangular area, the processor 20 can plan the cleaning path along which the robot 100 moves to clean from one corner of the rectangular area in an arcuate shape.
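To make the rectangular case concrete, here is a minimal Python sketch (illustrative only, not from the disclosure; the 4 m by 3 m dimensions follow the example above, while the lane spacing is an assumed brush width) that plans the area S2 around the target position and generates an arcuate path from one corner:

```python
import numpy as np

def plan_rect_area(center_xy, length=4.0, width=3.0):
    """Rectangular area S2 to be cleaned, centred on the target position.
    Returned as (x_min, y_min, x_max, y_max)."""
    cx, cy = center_xy
    return (cx - length / 2, cy - width / 2, cx + length / 2, cy + width / 2)

def plan_arcuate_path(rect, lane_spacing=0.25):
    """Boustrophedon ('arcuate') waypoints sweeping the rectangle from one
    corner, reversing direction on every other lane."""
    x0, y0, x1, y1 = rect
    path = []
    for i, y in enumerate(np.arange(y0, y1 + 1e-9, lane_spacing)):
        lane = [(x0, y), (x1, y)]
        if i % 2 == 1:
            lane.reverse()          # alternate lane direction
        path.extend(lane)
    return path

# usage: waypoints around a user standing at (2.0, 1.5)
waypoints = plan_arcuate_path(plan_rect_area((2.0, 1.5)))
```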
In this manner, with the floor sweeping robot 100 set to the normal cleaning mode, the user's position can be determined from the position where the voice was issued, the area S2 to be cleaned can be planned around that position, and the planned cleaning path L1 can be generated.
Referring to fig. 11, in some embodiments, the voice call information includes a follow cleaning mode, and the cleaning method further comprises:
step 022: when the cleaning mode corresponding to the voice calling information is the follow cleaning mode, following the movement of the target object M1 according to the target positions corresponding to the target areas S1 in consecutive multi-frame scene images, so as to clean the area corresponding to the movement track of the target object M1.
Specifically, the cleaning mode of the sweeping robot 100 may be a follow cleaning mode. When the sound source object M1 sends out voice calling information and the sweeping robot 100 is in the follow cleaning mode, the processor 20 can acquire the multi-frame scene images shot by the camera 50 and control the sweeping robot 100 to move according to the target positions corresponding to the target areas S1 in the consecutive frames, so that the sweeping robot 100 moves along with the target object M1 and the area corresponding to the movement track of the target object M1 can be cleaned.
For example, with the cleaning mode of the robot cleaner 100 set to the follow cleaning mode, after the user issues the voice call message of "start cleaning", the robot cleaner 100 photographs the user through the camera 50 upon receiving the message, determines the user's position from the scene image, and moves along a path planned toward that position. After the robot cleaner 100 reaches the vicinity of the user, the user may move about the scene; the robot cleaner 100 keeps photographing the user through the camera 50, forming consecutive frames of scene images, and follows the user according to the positions determined from those frames.
Therefore, by setting the sweeping robot 100 to the follow cleaning mode, the sweeping robot 100 can follow the user through the scene while cleaning. In the normal cleaning mode, when several dirty spots exist in the scene, the user has to issue a voice command for each spot, which is cumbersome and degrades the user experience; in the follow cleaning mode, the user only needs to issue a voice command once, and the sweeping robot 100 follows the user to clean the area where each dirty spot is located, so the operation is simple.
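The follow loop can be summarized by the sketch below (not from the disclosure; `robot`, `camera`, and the two callables are hypothetical stand-ins for the detection and point-cloud steps described earlier):

```python
def follow_and_clean(robot, camera, detect_target_area,
                     target_position_from_area, stop_requested):
    """Follow-cleaning loop: re-localize the calling user in every frame and
    steer toward the latest target position while cleaning along the way.
    All five parameters are hypothetical stand-ins for the components
    described above."""
    while not stop_requested():
        frame = camera.capture()              # one scene image per cycle
        area = detect_target_area(frame)      # target area S1, or None
        if area is None:
            robot.stop()                      # user lost: hold position
            continue
        target_xy = target_position_from_area(area)
        robot.move_toward(target_xy)          # track the moving user
        robot.clean_current_cell()            # clean along the movement track
```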
Referring to fig. 12 and 13, a cleaning display method according to an embodiment of the present application includes:
step 031: displaying, in a map, the current position and current posture of the cleaning equipment and the target object M1 that sends out the voice calling information, wherein the cleaning equipment faces the target object M1, and the current position and current posture are determined according to the current pose information of the cleaning equipment;
step 032: displaying a real-time moving path of the cleaning equipment from the current position to the target position corresponding to the target object M1.
The execution subject of the cleaning display method is a computer program product corresponding to the sweeping robot 100; the computer program product can run at the terminal 400 or the base station 200, for example as an application installed in the terminal 400 or the base station 200.
For example, when the computer program product runs on the base station 200 to implement the cleaning display method, the robot cleaner 100 can transmit information such as its movement track and pose and the position of the user to the base station 200; after the base station 200 processes the information, the display screen of the base station 200 can display it.
For another example, when the computer program product runs on the terminal 400 to implement the cleaning display method, the robot cleaner 100 can upload information such as its movement track and pose and the position of the user to the server 500, and the terminal 400 then obtains the information from the server 500 and displays it.
Specifically, the robot cleaner 100 is provided with a wireless communication module and can communicate wirelessly (such as via Bluetooth or Wi-Fi) with the user's terminal 400, and the user can view the cleaning display for the robot cleaner 100 on the display screen 80 of the terminal 400.
According to the current pose information of the robot cleaner 100, the current position of the robot cleaner 100 in the map, the current posture of the robot cleaner 100, and the target object M1 that sends out the voice call information together with its position can be displayed on the display screen 80 of the user's terminal 400; a real-time moving path of the robot cleaner 100 from the current position to the target position can also be displayed on the display screen 80.
For example, the robot cleaner 100 can communicate with the user's smartphone via Bluetooth, and the user can see on the display screen 80 of the smartphone the user who issued the voice call information and that user's current location. After the floor sweeping robot 100 receives the voice calling information, the current position and posture of the floor sweeping robot 100 can be displayed on the display screen 80 (the posture indicating, for example, whether the front of the floor sweeping robot 100 faces toward or away from the user). When the robot 100 sets off from the current position toward the target position, its real-time moving path can be displayed on the display screen 80 (the real-time moving path may be a straight path or an arcuate path).
In this way, by displaying on the display screen 80 the pose of the robot cleaner 100 on the map, the position of the user, and the direction and moving path of the robot cleaner 100 relative to the user, the user can clearly see through the display screen 80 the respective positions of the robot cleaner 100 and the user, the pose of the robot cleaner 100, and the relative position of the two.
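A minimal sketch of the status message such a display could consume is given below; every field name is illustrative, since the disclosure does not specify a wire format:

```python
import json
import time

def make_display_update(robot_xy, robot_heading_rad, user_xy, path_xy):
    """Status payload for the map view on the terminal 400 / base station 200:
    robot pose, caller position, and the real-time moving path. All field
    names are assumptions, not taken from the disclosure."""
    return json.dumps({
        "timestamp": time.time(),
        "robot": {"x": robot_xy[0], "y": robot_xy[1],
                  "heading": robot_heading_rad},        # current pose on map
        "user": {"x": user_xy[0], "y": user_xy[1]},     # caller position
        "path": [{"x": x, "y": y} for x, y in path_xy], # real-time moving path
    })

# usage: robot at (0, 0) heading 1.57 rad, user at (2, 1.5), two waypoints
msg = make_display_update((0.0, 0.0), 1.57, (2.0, 1.5), [(0.0, 0.0), (2.0, 1.5)])
```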
Optionally, the cleaning display method further includes displaying the real-time cleaning path L2 of the cleaning robot 100 while the area S2 to be cleaned is being cleaned, the area S2 being determined according to the target position.
Specifically, the processor 20 can determine the area S2 to be cleaned from the target position where the target object M1 is located; after the robot cleaner 100 is controlled to move to the area S2, the planned cleaning path L1 along which the robot cleaner 100 moves in the area S2 and the real-time cleaning path L2 during cleaning can be displayed on the display screen 80 of the terminal 400 that communicates with the robot cleaner 100.
For example, the processor 20 can determine, from the target position where the target object M1 is located, that the area S2 to be cleaned is a rectangular area with the sound source object M1 as its center of symmetry, and display on the display screen 80 of the smartphone the planned cleaning path L1 of the robot cleaner 100 in that rectangle (e.g., an arcuate path). Once the robot cleaner 100 starts cleaning along the planned cleaning path L1, the real-time cleaning path L2, i.e., the portion the robot cleaner 100 has already cleaned, can be displayed on the display screen 80 of the smartphone.
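For instance, the cleaning progress shown alongside L1 and L2 could be computed as in the following sketch (an assumption; the disclosure only says the cleaned portion is displayed):

```python
def cleaning_progress(planned_path, cleaned_path):
    """Fraction of the planned cleaning path L1 already covered by the
    real-time cleaning path L2; both arguments are lists of (x, y) points."""
    def length(path):
        # total polyline length over consecutive point pairs
        return sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                   for (x0, y0), (x1, y1) in zip(path, path[1:]))
    total = length(planned_path)
    return 0.0 if total == 0 else min(1.0, length(cleaned_path) / total)
```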
In this way, by displaying on the display screen 80 the real-time cleaning path L2 while the area S2 to be cleaned is being cleaned, the user can clearly follow the cleaning progress of the robot cleaner 100.
Referring to fig. 14, in order to better implement the cleaning method according to the embodiment of the present application, the embodiment of the present application further provides a cleaning device 10, which is applied to a cleaning apparatus, and the cleaning apparatus includes a main body 40, and a camera 50 and a radar 60 disposed on the main body 40. The cleaning device 10 may include a first adjustment module 11, a first determination module 12, an acquisition module 13, and an execution module 14. The first adjusting module 11 is configured to adjust the posture of the body 40 so that the camera 50 faces a target direction when receiving the voice call information, where the target direction is obtained by performing sound source localization on the voice call information; the first determining module 12 is configured to identify a target object M1 in the scene image captured by the camera 50, and determine a target area S1 in the scene image of the target object M1, where the target object M1 is a sound source object M1 that sends out voice call information; the acquiring module 13 is configured to acquire a target point cloud set corresponding to the target area S1 from the point cloud information acquired by the radar 60 based on the target area S1 and a preset calibration relationship between the camera 50 and the radar 60; the execution module 14 is configured to determine a target position of the target area from the target point cloud set, and move to the target position for cleaning.
The cleaning apparatus further includes a second determining module 15, where the second determining module 15 is configured to perform sound source localization according to the time when the voice call information is collected by each of the audio collecting components 70, so as to determine the target direction in which the sound source object M1 is located and the estimated distance between the sound source object M1 and the cleaning device.
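For illustration, a far-field time-difference-of-arrival (TDOA) estimate of the target direction might look like the sketch below; the microphone layout, the speed of sound, and the least-squares formulation are assumptions rather than the disclosed method, and estimating the distance as well would require a near-field model:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumption)

def estimate_direction(mic_xy: np.ndarray, arrival_times: np.ndarray) -> np.ndarray:
    """Far-field TDOA direction estimate from the times at which each audio
    collecting component 70 heard the call. For a plane wave with unit
    direction u toward the source: (m_i - m_0) . u = -c * (t_i - t_0).
    Needs at least 3 microphones for a 2-D direction."""
    A = mic_xy[1:] - mic_xy[0]                          # mic offsets vs. mic 0
    b = -SPEED_OF_SOUND * (arrival_times[1:] - arrival_times[0])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)           # least-squares solve
    return u / np.linalg.norm(u)                        # unit vector to source
```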
The cleaning device further comprises a judging module 16. The judging module 16 is used for judging whether the estimated distance is larger than a preset threshold value; if so, the step of identifying the target object M1 in the scene image shot by the camera 50 is entered; if not, the cleaning device moves a target distance away from the target object M1 along the target direction so that the distance between the sound source object M1 and the cleaning device becomes larger than the preset threshold, and the step of identifying the target object M1 in the scene image shot by the camera 50 is entered after the movement is completed.
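Claim 2 states only that the target distance is determined from the estimated distance and the preset threshold; one plausible reading is sketched below (the safety margin is an assumption):

```python
def backoff_distance(estimated_m: float, threshold_m: float,
                     margin_m: float = 0.1) -> float:
    """Distance to retreat along the target direction when the caller is too
    close for the camera 50 to frame them. (threshold - estimated) plus a
    small margin is one plausible reading of claim 2, not the disclosed
    formula."""
    if estimated_m > threshold_m:
        return 0.0          # far enough already: identify directly
    return (threshold_m - estimated_m) + margin_m
```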
The cleaning apparatus further includes a second adjustment module 17 for adjusting the posture of the main body 40 so that the center of the target area S1 is located at a preset position in the scene image.
The execution module 14 is specifically configured to determine, in the target point cloud set, a distance value between each point cloud and the cleaning device; and to determine a target value among the distance values and the target point cloud corresponding to that target value, wherein the target position is the position corresponding to the target point cloud and the target value includes any one of a minimum value, a maximum value, an average value, and a median value.
The cleaning device further includes a first planning module 18, where the first planning module 18 is configured to plan the cleaning area S2 according to the target location when the cleaning mode corresponding to the voice call information is a normal cleaning mode.
The cleaning apparatus further comprises a second planning module 19, wherein the second planning module 19 is configured to plan a cleaning path in the cleaning area S2 and move along the cleaning path to clean the cleaning area S2.
The cleaning device further includes a cleaning module 20, where the cleaning module 20 is configured to, when the cleaning mode corresponding to the voice call information is the following cleaning mode, move along with the target object M1 according to the target position corresponding to the target area S1 of the continuous multi-frame scene images, so as to clean the area corresponding to the movement track of the target object M1.
The cleaning device 10 has been described above with reference to the accompanying drawings from the perspective of functional modules, which may be implemented in hardware, in software instructions, or in a combination of hardware and software modules. Specifically, each step of the method embodiments of the present application may be completed by an integrated logic circuit of hardware in a processor and/or by instructions in software form, and the steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware processor or by a combination of hardware and software modules in the processor. Alternatively, the software modules may be located in a storage medium well established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method embodiments.
Referring to fig. 15, in order to better implement the cleaning display method according to the embodiment of the present application, the embodiment of the present application further provides a cleaning display device 90, where the cleaning display device 90 includes a first display module 91 and a second display module 92. The first display module 91 is configured to display, in the map, the current position and posture of the cleaning device and the target object M1 that sends out the voice call information, where the cleaning device faces the target object M1 and the current position and posture are determined according to the current pose information of the cleaning device; the second display module 92 is configured to display a real-time moving path of the cleaning device from the current position to the target position corresponding to the target object M1.
The cleaning display device 90 has been described above with reference to the accompanying drawings from the perspective of functional modules, which may be implemented in hardware, in software instructions, or in a combination of hardware and software modules. Specifically, each step of the method embodiments of the present application may be completed by an integrated logic circuit of hardware in a processor and/or by instructions in software form, and the steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware processor or by a combination of hardware and software modules in the processor. Alternatively, the software modules may be located in a storage medium well established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method embodiments.
Referring again to fig. 2, the cleaning apparatus of the present embodiment includes a processor 20, a memory 30 and a computer program 31, wherein the computer program 31 is stored in the memory 30 and executed by the processor 20, and the computer program 31 includes instructions for executing the cleaning method of any of the foregoing embodiments, which are not described herein for brevity.
The computer program product of the present application embodiment includes a computer program including instructions for performing the cleaning display method of any of the above embodiments, and for brevity, will not be described in detail herein.
Referring to fig. 16, the embodiment of the present application further provides a computer readable storage medium 300, on which a computer program 310 is stored, where the computer program 310, when executed by the processor 320, implements the steps of the cleaning method of any of the above embodiments, which is not described herein for brevity.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Moreover, the scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as should be understood by those skilled in the art to which the embodiments of the present application pertain.
While the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are illustrative and are not to be construed as limiting the present application, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A cleaning method, characterized in that it is applied to a cleaning device, the cleaning device including a body, and a camera and a radar provided to the body, the method comprising:
under the condition that voice calling information is received, adjusting the posture of the machine body to enable the camera to face a target direction, wherein the target direction is obtained by carrying out sound source positioning on the voice calling information;
identifying a target object in a scene image shot by the camera, and determining a target area of the target object in the scene image, wherein the target object is a sound source object which sends out the voice calling information;
acquiring a target point cloud set corresponding to the target area in the point cloud information acquired by the radar based on the target area and a preset calibration relation between the camera and the radar;
and determining the target position of the target area according to the target point cloud set, and moving to the target position for cleaning.
2. The cleaning method of claim 1, wherein the cleaning device further comprises an audio collection assembly comprising a plurality of audio collection components for collecting the voice call information, the method further comprising:
performing sound source localization according to the time when each audio acquisition component acquires the voice calling information so as to determine the target direction in which the sound source object is positioned and the estimated distance between the sound source object and the cleaning equipment;
judging whether the estimated distance is larger than a preset threshold value or not;
if yes, entering the step of identifying the target object in the scene image shot by the camera;
if not, moving a target distance away from the target object along the target direction so that the distance between the sound source object and the cleaning device is larger than the preset threshold value, and entering the step of identifying the target object in the scene image shot by the camera after the movement is completed;
the target distance is determined according to the estimated distance and the preset threshold value.
3. The cleaning method according to claim 2, wherein before the step of acquiring the target point cloud set corresponding to the target area from the point cloud information acquired by the radar based on the target area and the preset calibration relation between the camera and the radar, the method further comprises:
and adjusting the posture of the airframe so that the center of the target area is positioned at a preset position in the scene image.
4. The cleaning method of claim 1, wherein the determining the target position of the target area according to the target point cloud set comprises:
determining a distance value between each point cloud and the cleaning equipment in the target point cloud set;
and determining target values in the distance values, and determining target point clouds corresponding to the target values, wherein the target positions are positions corresponding to the target point clouds, and the target values comprise any one of a minimum value, a maximum value, an average value and a median value.
5. The cleaning method of claim 1, wherein the voice call information includes a normal cleaning mode, the method further comprising:
when the cleaning mode corresponding to the voice calling information is the normal cleaning mode, planning an area to be cleaned according to the target position;
planning a cleaning path in the area to be cleaned, and moving along the cleaning path to clean the area to be cleaned.
6. The cleaning method of claim 1, wherein the voice call information includes a follow cleaning mode, the method further comprising:
when the cleaning mode corresponding to the voice calling information is the follow cleaning mode, following the movement of the target object according to the target positions corresponding to the target areas in consecutive multi-frame scene images, so as to clean the area corresponding to the movement track of the target object.
7. A clean display method, comprising:
displaying, in a map, the current position and current posture of the cleaning equipment and a target object sending out voice calling information, wherein the cleaning equipment faces the target object, and the current position and current posture are determined according to the current pose information of the cleaning equipment;
and displaying a real-time moving path of the cleaning equipment from the current position to a target position corresponding to the target object.
8. A cleaning apparatus, comprising:
a processor, a memory; and
a computer program, wherein the computer program is stored in the memory and executed by the processor, the computer program comprising instructions for performing the cleaning method of any one of claims 1 to 6.
9. A computer program product comprising a computer program comprising instructions for performing the cleaning presentation method of claim 7.
10. A non-transitory computer readable storage medium containing a computer program which, when executed by a processor, causes the processor to perform the cleaning method of any one of claims 1 to 6 and the cleaning presentation method of claim 7.
CN202311283626.3A 2023-09-28 2023-09-28 Cleaning method, cleaning display method, cleaning apparatus, and storage medium Pending CN117257170A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311283626.3A CN117257170A (en) 2023-09-28 2023-09-28 Cleaning method, cleaning display method, cleaning apparatus, and storage medium


Publications (1)

Publication Number Publication Date
CN117257170A true CN117257170A (en) 2023-12-22

Family

ID=89221212


Country Status (1)

Country Link
CN (1) CN117257170A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination