CN117770713A - Control method and device of cleaning robot and cleaning robot

Publication number: CN117770713A
Authority: CN (China)
Application number: CN202211157471.4A
Original language: Chinese (zh)
Inventors: 高振东, 李志昂
Assignee (original and current): Positec Power Tools Suzhou Co Ltd
Legal status: Pending
Prior art keywords: cleaning, voice, instruction, cleaning robot, target area
Classification: Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies

Application filed by Positec Power Tools Suzhou Co Ltd, with priority to CN202211157471.4A.
Abstract

The present disclosure relates to a control method of a cleaning robot, a cleaning robot, a computer device, a storage medium and a computer program product. The method comprises the following steps: acquiring a target area corresponding to a cleaning instruction, wherein the target area comprises a target position to be cleaned; controlling the cleaning robot to travel from the current position to the target area; and identifying the target position in the target area, controlling the cleaning robot to travel to the target position, and cleaning a preset range around the target position. The method enables accurate positioning for fixed-point cleaning.

Description

Control method and device of cleaning robot and cleaning robot
Technical Field
The present disclosure relates to the field of automation technology, and in particular, to a control method of a cleaning robot, a computer device, and a storage medium.
Background
Cleaning robots mainly perform automatic household cleaning, washing and similar tasks, and come in many types according to their scope and purpose of application, such as floor-sweeping robots and mopping robots. When a user needs fixed-point cleaning, the cleaning robot can be summoned by voice: it travels to the sound source position according to the relative position between the sound source and the robot, thereby performing fixed-point cleaning.
However, in the related art, the cleaning robot cannot accurately judge the sound source position; the resulting positioning error prevents accurate fixed-point cleaning.
Disclosure of Invention
Based on this, it is necessary to provide a control method of a cleaning robot, and a computer device capable of precisely positioning to achieve fixed-point cleaning, in view of the above-described technical problems.
In a first aspect, embodiments of the present disclosure provide a control method of a cleaning robot. The method comprises the following steps:
acquiring a target area corresponding to a cleaning instruction, wherein the target area comprises a target position to be cleaned;
controlling the cleaning robot to travel from the current position to the target area;
and identifying the target position in the target area, controlling the cleaning robot to travel to the target position, and cleaning a preset range area of the target position.
In one embodiment, the identifying, in the target area, the target location where the stain area is located includes:
and acquiring a first voice cleaning instruction, and determining the target position as the sound source position of the first voice cleaning instruction.
In one embodiment, the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
acquiring a second voice cleaning instruction;
identifying the second voice cleaning instruction to obtain identification information;
and determining a target area matched with the identification information according to the association relation between the identification information and the target area.
In one embodiment, the identifying the second voice cleaning instruction, to obtain the identification information, includes:
and under the condition that a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot, identifying the second voice cleaning instruction to obtain identification information.
In one embodiment, the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
acquiring a second voice cleaning instruction;
determining an area where the sound source position of the second voice cleaning instruction is located as a target area under the condition that no shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot;
the acquiring the first voice cleaning instruction, determining the target position as the sound source position of the first voice cleaning instruction, includes:
and taking the second voice cleaning instruction as a first voice cleaning instruction, and determining the target position as the sound source position of the second voice cleaning instruction.
In one embodiment, the determining whether the obstruction exists includes:
acquiring the signal intensity and/or the angular resolution of the second voice cleaning instruction;
and determining whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot according to the signal intensity and/or the angle resolution.
In one embodiment, the controlling the cleaning robot to travel from the current position to the target area includes:
acquiring the current position of the cleaning robot and a preset working map, wherein the working map comprises the target area;
determining a driving path from the current position to the target area according to the working map;
and controlling the cleaning robot to travel to the target area according to the travel path.
In one embodiment, the method for acquiring the working map includes:
acquiring an initial working map;
and carrying out area division on the initial working map to obtain a working map comprising a plurality of areas, wherein each area in the plurality of areas corresponds to identification information.
In a second aspect, the embodiment of the disclosure also provides a control device of the cleaning robot. The device comprises:
the acquisition module is used for acquiring a target area corresponding to the cleaning instruction, wherein the target area comprises a target position to be cleaned;
the control module is used for controlling the cleaning robot to travel from the current position to the target area; and identifying the target position in the target area, controlling the cleaning robot to travel to the target position, and cleaning a preset range area of the target position.
In one embodiment, the control module includes:
the first acquisition sub-module is used for acquiring a first voice cleaning instruction and determining the target position as the sound source position of the first voice cleaning instruction.
In one embodiment, the cleaning instructions include a second voice cleaning instruction, and the acquisition module includes:
the second acquisition sub-module is used for acquiring a second voice cleaning instruction;
the identification module is used for identifying the second voice cleaning instruction to obtain identification information;
and the first determining submodule is used for determining the target area matched with the identification information according to the association relation between the identification information and the target area.
In one embodiment, the identification module includes:
and the identification sub-module is used for identifying the second voice cleaning instruction to obtain identification information under the condition that a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot.
In one embodiment, the cleaning instructions include a second voice cleaning instruction, and the first acquisition sub-module includes:
the third acquisition sub-module is used for acquiring a second voice cleaning instruction;
the second determining submodule is used for determining an area where the sound source position of the second voice cleaning instruction is located as a target area under the condition that no shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot;
the control module comprises:
and the third determining submodule is used for taking the second voice cleaning instruction as a first voice cleaning instruction and determining the target position as the sound source position of the second voice cleaning instruction.
In one embodiment, the determining module for determining whether the obstruction is present includes:
a fourth obtaining sub-module, configured to obtain signal strength and/or angular resolution of the second voice cleaning instruction;
and the fourth determining submodule is used for determining whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot according to the signal intensity and/or the angle resolution.
In one embodiment, the control module includes:
a fifth obtaining sub-module, configured to obtain a current position of the cleaning robot and a preset working map, where the working map includes the target area;
a fifth determining submodule, configured to determine a travel path from the current position to the target area according to the working map;
and the control sub-module is used for controlling the cleaning robot to travel to the target area according to the travel path.
In one embodiment, the working map obtaining module includes:
a sixth acquisition sub-module, configured to acquire an initial working map;
the dividing module is used for dividing the initial working map into areas to obtain a working map comprising a plurality of areas, wherein each area in the plurality of areas corresponds to identification information.
In a third aspect, the disclosed embodiments also provide a cleaning robot including:
a body;
the moving assembly is arranged on the machine body and used for driving the machine body to move;
the cleaning component is arranged on the machine body and is used for executing a cleaning task according to the set cleaning parameters;
a memory for storing instructions executable by the processor;
the processor, which comprises an instruction identification unit, is arranged on the machine body, is electrically connected with the moving assembly, the cleaning assembly and the memory, and is used for implementing the control method of the cleaning robot in any one of the embodiments of the disclosure when executing the instructions.
In one embodiment, the cleaning robot further comprises a sound source positioning assembly disposed on the body for receiving a voice cleaning command.
In one embodiment, the instruction recognition unit includes a semantic recognition unit.
In one embodiment, the cleaning robot further comprises at least one of the following components:
the visual sensor assembly is arranged on the machine body, is electrically connected with the processor and is used for acquiring image data of a preset range of the position of the cleaning robot;
and the laser radar component is arranged on the machine body, is electrically connected with the processor and is used for acquiring laser point cloud data of a preset range of the position of the cleaning robot.
In a fourth aspect, embodiments of the present disclosure also provide a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of any of the embodiments of the present disclosure when the computer program is executed.
In a fifth aspect, embodiments of the present disclosure also provide a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments of the present disclosure.
In a sixth aspect, embodiments of the present disclosure also provide a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the method according to any of the embodiments of the present disclosure.
According to the embodiments of the disclosure, when the cleaning robot needs to be controlled to clean a target position, a cleaning instruction is first acquired, a target area containing the target position to be cleaned is obtained according to the cleaning instruction, the cleaning robot is controlled to travel to the target area, the target position is then determined within the target area, and the cleaning robot is controlled to travel to the target position and clean a preset range around it, thereby achieving fixed-point cleaning. By first determining the target area corresponding to the target position, and determining the target position only after the target area is reached, the method avoids the problem of the cleaning robot failing to accurately acquire the target position owing to factors such as long distance or the presence of a shielding object; accurate positioning during fixed-point cleaning is thus ensured and user experience is improved.
Drawings
FIG. 1 is a flow chart of a control method of a cleaning robot in one embodiment;
FIG. 2 is a schematic view of a control method of a cleaning robot in one embodiment;
FIG. 3 is a schematic view of a control method of a cleaning robot in one embodiment;
FIG. 4 is a flow chart of a control method of a cleaning robot in one embodiment;
FIG. 5 is a flow chart illustrating a method of determining whether a shielding object is present, according to an embodiment;
FIG. 6 is a schematic view of a control method of a cleaning robot in one embodiment;
FIG. 7 is a block diagram showing a control apparatus of a cleaning robot in one embodiment;
FIG. 8 is a schematic view of a cleaning robot in one embodiment;
FIG. 9 is a schematic view of a structure of a cleaning robot in one embodiment;
fig. 10 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the embodiments of the present disclosure will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosed embodiments and are not intended to limit the disclosed embodiments.
In one embodiment, as shown in fig. 1, there is provided a control method of a cleaning robot, the method including:
step S110, a target area corresponding to a cleaning instruction is obtained, wherein the target area comprises a target position to be cleaned;
in the embodiment of the disclosure, a cleaning instruction is acquired, and a target area where a target position to be cleaned is located is determined according to the cleaning instruction, wherein the cleaning instruction may include an instruction of target area information. In one example, the cleaning instruction may include, but is not limited to, a voice cleaning instruction of a user, and a control instruction sent by the user through an electronic device terminal such as a mobile phone, where when the cleaning instruction is the voice cleaning instruction of the user, semantic information in the voice cleaning instruction may be identified through a semantic identification technology, so as to obtain the target area information. Typically, the target location to be cleaned is a small area or location point and the target area is a large area, such as a room, that includes the target location. In one example, the working area of the cleaning robot may be divided in advance according to an actual application scenario to obtain a plurality of sub-areas, where the target area is one of the sub-areas.
Step S120, controlling the cleaning robot to travel from the current position to the target area;
in the embodiment of the disclosure, after the target area is determined, the cleaning robot is controlled to travel from the current position to the target area. In one example, when the cleaning robot is controlled to travel to the target area, in a case where the cleaning robot is located at or within a preset distance from a boundary line of the target area, it may be considered that the cleaning robot has reached the target area at this time; in another example, a center position may be set for each region in advance, and when the cleaning robot reaches within a preset range of the center position of the target region, the cleaning robot may be considered to have reached the target region at this time. In one example, in a case where the current position of the cleaning robot is located within the target area, the cleaning robot may be controlled not to move.
And step S130, identifying the target position in the target area, controlling the cleaning robot to travel to the target position, and cleaning a preset range area of the target position.
In the embodiment of the disclosure, after the cleaning robot reaches the target area, the target position is identified within it; in general, the cleaning robot performs identification according to the information it acquires, thereby obtaining the target position. In one example, the information acquired by the cleaning robot may include image information acquired by a vision sensor assembly, radar information acquired by a laser radar assembly, instruction information sent by the user through an electronic device terminal such as a mobile phone, or a voice cleaning instruction issued by the user. The target position may include a position where a stain is present. In one possible implementation, when the cleaning robot determines the target position from image information, the vision sensor assembly acquires image information in the target area after arrival, and the acquired image information is compared with previously acquired image information of the target area in its clean, stain-free state, thereby determining the position that needs cleaning, i.e. the target position. In another possible implementation, when the cleaning robot determines the target position from radar information, the laser radar assembly acquires radar information in the target area after arrival, and the acquired radar information is compared with previously acquired radar information of the target area in its clean state, thereby determining the target position.
In another example, after the image data or radar data is obtained, the target position is determined according to a preset stain recognition model, which may be trained based on a deep learning algorithm or built on an image recognition algorithm; the present disclosure does not limit how the stain recognition model is obtained. The cleaning robot is then controlled to travel to the target position and clean a preset range around it, the preset range usually being a suitable cleaning range set in advance according to the actual application scenario.
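The comparison of a freshly acquired image against a reference image of the area in its clean state could look like the following sketch. The grayscale representation, the difference threshold and the centroid output are illustrative assumptions; the disclosure names the comparison but not an algorithm.

```python
import numpy as np

def find_stain_position(current_img, clean_ref_img, diff_threshold=40):
    """Compare the current grayscale image with a pre-acquired image of
    the target area in its clean state; return the (row, col) centroid
    of the differing region as the candidate target position, or None.
    Threshold value is an invented placeholder."""
    diff = np.abs(current_img.astype(np.int16) - clean_ref_img.astype(np.int16))
    mask = diff > diff_threshold
    if not mask.any():
        return None  # no stain detected in this area
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```

A radar-based variant would follow the same compare-against-clean-reference pattern on point-cloud data instead of pixels.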
According to the embodiments of the disclosure, when the cleaning robot needs to be controlled to clean a target position, a cleaning instruction is first acquired, a target area containing the target position to be cleaned is obtained according to the cleaning instruction, the cleaning robot is controlled to travel to the target area, the target position is then determined within the target area, and the cleaning robot is controlled to travel to the target position and clean a preset range around it, thereby achieving fixed-point cleaning. By first determining the target area corresponding to the target position, and determining the target position only after the target area is reached, the method avoids the problem of the cleaning robot failing to accurately acquire the target position owing to factors such as long distance or the presence of a shielding object; accurate positioning during fixed-point cleaning is thus ensured and user experience is improved.
It is noted that, in one embodiment, the cleaning assembly of the cleaning robot may be in a lifted state, for example a state in which the cleaning assembly is at least partially out of contact with the surface of the working area, during at least one of the processes of controlling the cleaning robot to travel from the current position to the target area and controlling it to travel to the target position. In other words, the cleaning assembly does not operate, or is in a non-operating state, during at least one of these two travel processes.
In one embodiment, the identifying, in the target area, the target location where the stain area is located includes:
and acquiring a first voice cleaning instruction, and determining the target position as the sound source position of the first voice cleaning instruction.
In the embodiment of the disclosure, when determining the target position, a first voice cleaning instruction is acquired, the sound source position of the first voice cleaning instruction is determined, and that sound source position is taken as the target position to be cleaned. In this embodiment, the target position for fixed-point cleaning is the sound source position of the voice cleaning instruction, i.e. the position from which the user issues the instruction; the user therefore needs to issue the first voice cleaning instruction at the position to be cleaned, so that the cleaning robot can locate the target position according to the sound source position. In one example, the sound source position may be determined from the time delays between signals acquired by different microphones; a localization method based on sound pressure amplitude ratios may also be used, exploiting the differences in sound pressure amplitude of signals received by different microphones from the same source. In one example, the sound source position of the first voice cleaning instruction may be located through a sound source localization technique according to voice information of the instruction, such as its angular resolution. In one example, after the cleaning robot reaches the target area, the user issues the first voice cleaning instruction within the target area; the user issuing the instruction may or may not move before the cleaning robot arrives.
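A far-field, two-microphone version of the time-delay localization mentioned above can be sketched as follows; the microphone geometry, the plane-wave approximation and the speed-of-sound constant are assumptions for illustration, not details given in the disclosure.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximately at room temperature

def doa_from_tdoa(time_delay_s, mic_spacing_m):
    """Estimate the direction of arrival (degrees from broadside) from
    the time-difference-of-arrival between two microphones, using the
    far-field plane-wave relation sin(theta) = c * tau / d."""
    ratio = SPEED_OF_SOUND * time_delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
    return math.degrees(math.asin(ratio))
```

With more than two microphones, intersecting the bearings from several pairs yields a position rather than only a direction, which matches the fixed-point use described above.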
In one example, the cleaning instruction in step S110 includes a second voice cleaning instruction. In one possible implementation, the second voice cleaning instruction may be used directly as the first voice cleaning instruction: after receiving the second voice cleaning instruction, the cleaning robot judges whether the instruction's sound source position can be accurately located, the basis for this judgment possibly including, but not limited to, the strength and angular resolution of the second voice cleaning instruction and its sound pressure amplitude ratio. If the sound source position can be accurately located, the second voice cleaning instruction may be used directly as the first voice cleaning instruction, and the cleaning robot is controlled to travel to the target position and clean the preset range around it. In another example, the cleaning instruction in step S110 includes a second voice cleaning instruction and, in another possible implementation, the first and second voice cleaning instructions are two separate instructions, for example when the sound source position of the second voice cleaning instruction is far from the cleaning robot, or when a shielding object exists between the sound source position and the robot.
According to the embodiment of the disclosure, when the target position is identified, the first voice cleaning instruction is acquired and its sound source position is determined to be the target position to be cleaned, so that accurate fixed-point cleaning can be performed according to the user's voice instruction. The problem of the cleaning robot failing to accurately acquire the target position owing to long distance or the presence of shielding objects is avoided, accurate positioning during fixed-point cleaning is ensured, and user experience is improved.
In one embodiment, as shown in fig. 2, the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
step S111, a second voice cleaning instruction is acquired;
step S112, identifying the second voice cleaning instruction to obtain identification information;
step S113, according to the association relation between the identification information and the target area, the target area matched with the identification information is determined.
In an embodiment of the disclosure, the cleaning instruction includes a second voice cleaning instruction, and the cleaning robot determines the corresponding target area according to it. The second voice cleaning instruction is acquired, and semantic recognition is performed on the voice information it contains to obtain identification information, which may be produced by a semantic recognition module built into the cleaning robot. In this embodiment there may be an association between identification information and cleaning areas; in general, each cleaning area corresponds to one piece of identification information. The target area matching the identification information is then determined according to this association. In one example, the identification information may include an area name, a room number or an identification code of the cleaning area; different area names may be set in advance for different areas, and the correspondence between cleaning areas and area names is stored in the control unit of the cleaning robot. In one example, as shown in fig. 3, the cleaning areas correspond to different identification information "A, B, C, D, E". When the user wants to perform fixed-point cleaning in area D, the user first issues a second voice cleaning instruction; the cleaning robot receives it and performs voice recognition on the voice information, determining that the target area is area D, as shown in fig. 4.
According to the embodiment of the disclosure, the cleaning instruction may include a second voice cleaning instruction, and the target area is determined according to the association between the identification information in the instruction and the target area. Determining the target area from the user's voice cleaning instruction provides coarse positioning for the cleaning robot's fixed-point cleaning and ensures that subsequent accurate positioning, and hence fixed-point cleaning, can follow; the target area can be determined accurately and quickly through the voice cleaning instruction, improving user experience.
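The association between recognized identification information and cleaning areas described above can be sketched as a simple lookup. The keyword strings, area labels and flat string matching are hypothetical; a real system would sit behind a semantic recognition module.

```python
from typing import Optional

# Hypothetical association table stored in the robot's control unit:
# recognized keyword -> cleaning-area identification.
AREA_BY_KEYWORD = {
    "kitchen": "D",
    "bedroom": "A",
    "living room": "B",
}

def match_target_area(identification_info: str) -> Optional[str]:
    """Return the cleaning area associated with the semantic-recognition
    output, or None when no area matches."""
    return AREA_BY_KEYWORD.get(identification_info.strip().lower())
```

Returning `None` for an unrecognized keyword lets the controller fall back to, for example, asking the user to repeat the instruction.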
In one embodiment, the identifying the second voice cleaning instruction, to obtain the identification information, includes:
and under the condition that a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot, identifying the second voice cleaning instruction to obtain identification information.
In the embodiment of the disclosure, the cleaning instruction includes a second voice cleaning instruction. The second voice cleaning instruction is acquired, and whether a shielding object exists between its sound source position and the current position of the cleaning robot is determined from the instruction. When a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot, the cleaning robot can be considered unable to accurately locate the target position to be cleaned directly from the second voice cleaning instruction. In that case, semantic recognition is performed on the voice information contained in the second voice cleaning instruction to obtain the identification information, which may be produced by the semantic recognition module built into the cleaning robot. In one example, as shown in fig. 3, the cleaning areas correspond to different identification information "A, B, C, D, E". When the user wants to perform fixed-point cleaning in area D, the user first issues a second voice cleaning instruction; the cleaning robot receives it, determines that a shielding object lies between itself and the sound source, performs voice recognition on the voice information, and determines that the target area is area D, as shown in fig. 4.
According to the embodiment of the disclosure, when the cleaning instruction includes a second voice cleaning instruction and a shielding object exists between the sound source position of the instruction and the robot's position, semantic recognition is performed on the instruction to obtain the target area. Coarse positioning can thus be performed even when a shielding object between the cleaning robot and the target position prevents accurate localization, with accurate positioning following afterwards; this avoids the problem of fixed-point cleaning failing because of a shielding object between the cleaning robot and the target position to be cleaned.
In one embodiment, the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
acquiring a second voice cleaning instruction;
determining an area where the sound source position of the second voice cleaning instruction is located as a target area under the condition that no shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot;
the acquiring the first voice cleaning instruction, determining the target position as the sound source position of the first voice cleaning instruction, includes:
And taking the second voice cleaning instruction as a first voice cleaning instruction, and determining the target position as the sound source position of the second voice cleaning instruction.
In the embodiment of the disclosure, the cleaning instruction includes a second voice cleaning instruction. The second voice cleaning instruction is acquired, and it is determined according to the instruction whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot. When it is judged that no shielding object exists between the two, the area where the sound source position of the second voice cleaning instruction is located is determined as the target area, the second voice cleaning instruction is taken as the first voice cleaning instruction, and the sound source position of the second voice cleaning instruction is taken as the target position to be cleaned. Whether the shielding object exists may be judged by the signal intensity, the angular resolution, and the like of the second voice cleaning instruction. In this embodiment, since it is determined that no shielding object exists, the cleaning robot can accurately locate the sound source position, may directly take the sound source position of the second voice cleaning instruction as the target position to be cleaned, and is controlled to travel to the target position for cleaning. It can be understood that, in this case, fixed-point cleaning can be achieved without performing semantic recognition on the second voice cleaning instruction, or while ignoring its semantic information.
According to the embodiment of the disclosure, when the cleaning instruction includes the second voice cleaning instruction and no shielding object exists between the sound source position of the instruction and the position of the robot, the sound source position of the second voice cleaning instruction is directly taken as the target position to be cleaned and the robot is controlled to clean it. Accurate fixed-point cleaning can thus be realized directly, without coarse positioning, which simplifies the fixed-point cleaning process while preserving accurate positioning and further improves user experience.
In one embodiment, as shown in fig. 5, the manner of determining whether the shielding object exists includes:
step S510, acquiring the signal intensity and/or the angular resolution of the second voice cleaning command;
step S520, determining whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot according to the signal intensity and/or the angular resolution.
In the embodiment of the disclosure, when judging whether a shielding object exists between the sound source of the second voice cleaning instruction and the cleaning robot, the signal intensity and/or the angular resolution of the second voice cleaning instruction may be obtained. The signal intensity refers to the sound intensity of the second voice cleaning instruction, and the angular resolution refers to the angular resolution of its sound signal. Whether a shielding object exists between the sound source position and the current position of the cleaning robot is then determined according to the acquired sound intensity and/or angular resolution. Normally, when no shielding object exists, the signal intensity is high and the angular resolution is high; when a shielding object exists, the signal intensity is low and the angular resolution is low.
According to the embodiment of the disclosure, whether the shielding object exists is determined through the signal intensity and/or the angular resolution of the second voice cleaning instruction, so that a relatively accurate judgment result can be obtained and the corresponding operation can be executed according to it. This streamlines the process, ensures accurate fixed-point cleaning by the cleaning robot, and improves user experience.
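As a minimal sketch of this judgment, the two measurements can be compared against thresholds: a shielding object attenuates the voice signal and blurs the estimated bearing, so a weak signal or a coarse bearing estimate suggests an obstruction. The threshold values, units, and the function name `obstruction_present` below are assumptions for illustration, not values from the disclosure.

```python
def obstruction_present(signal_strength_db: float,
                        angular_resolution_deg: float,
                        strength_threshold_db: float = 50.0,
                        resolution_threshold_deg: float = 10.0) -> bool:
    """Heuristic occlusion test: report a shielding object when the voice
    signal is weaker than expected or the bearing estimate is coarser
    than expected. Thresholds would be calibrated per device."""
    weak_signal = signal_strength_db < strength_threshold_db
    coarse_bearing = angular_resolution_deg > resolution_threshold_deg
    return weak_signal or coarse_bearing
```

The "and/or" of the disclosure is reflected here by combining both cues with `or`; a stricter variant could require both cues to agree before declaring an obstruction.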
In one embodiment, the controlling the cleaning robot to travel from the current position to the target area includes:
acquiring the current position of the cleaning robot and a preset working map, wherein the working map comprises the target area;
determining a driving path from the current position to the target area according to the working map;
and controlling the cleaning robot to travel to the target area according to the travel path.
In the embodiment of the disclosure, when the cleaning robot is controlled to travel to the target area, the current position of the cleaning robot and a preset working map are first obtained. The preset working map is usually a map determined in advance according to the working area of the cleaning robot, and the working map includes the target area. A travel path of the cleaning robot from the current position to the target area is then determined according to the current position of the cleaning robot and the position of the target area. Typically, the travel path is obtained by a preset path planning algorithm; in one example, the planned travel path is the shortest obstacle-free path from the current position to the target area. After the travel path is obtained, the cleaning robot is controlled to travel to the target area along the travel path. In one example, as shown in fig. 6, the target area is determined to be area D, and the cleaning robot is controlled to travel along the planned travel path from the current position to area D.
According to the embodiment of the disclosure, after the target area is determined, the travel path of the cleaning robot is determined according to the preset working map, and the cleaning robot is controlled to travel to the target area along the travel path. The cleaning robot is thus brought to the target area according to the coarse positioning, which ensures that subsequent accurate fixed-point cleaning can be performed.
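A travel path of this kind can be sketched with a breadth-first search over an occupancy-grid working map, which yields the shortest obstacle-free 4-connected path. The grid representation and the function name `plan_path` are illustrative assumptions; a real robot would typically run A* or a similar planner on its actual map.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle);
    returns the shortest 4-connected path as a list of (row, col) cells,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

Because BFS expands cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is guaranteed to be a shortest obstacle-free path.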
In one embodiment, the method for obtaining the working map includes:
acquiring an initial working map;
and carrying out area division on the initial working map to obtain a working map comprising a plurality of areas, wherein each area in the plurality of areas corresponds to identification information.
In the embodiment of the disclosure, an initial working map is first obtained. In this embodiment, the initial working map may be regarded as a complete, undivided map of the working area of the cleaning robot, i.e. a map containing only one area. The initial working map is divided into areas to obtain a working map comprising a plurality of areas. In one example, when the initial working map is divided, the areas may be divided regularly according to a preset division algorithm, or the map may be divided manually according to the actual application scenario. Each area corresponds to identification information, such as an area label or an area name, so that during the coarse-positioning stage of fixed-point cleaning the cleaning robot can conveniently determine the target area according to the identification information. In one example, to achieve accurate positioning for fixed-point cleaning, the areas may be divided based on the range within which the cleaning robot can accurately locate a sound source; alternatively, the areas may be divided such that the number of shielding objects in each divided area is smaller than a preset number. In one example, the working map may be constructed through lidar, vision, or a combination of both.
According to the embodiment of the disclosure, the initial working map is divided into a working map of a plurality of areas, with identification information corresponding to each area. When the cleaning robot performs fixed-point cleaning and the sound source position cannot be accurately located, coarse positioning is performed first and accurate positioning follows, so that accurate fixed-point cleaning can be achieved and user experience is improved.
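The divided working map with per-area identification information can be sketched as a table of labeled rectangles, plus a lookup that returns the label of the area containing a given position. The region bounds, labels, and function name below are illustrative assumptions, not data from the disclosure.

```python
from typing import Optional, Tuple

# Illustrative working map: each area label maps to an axis-aligned
# rectangle (x0, y0, x1, y1) in map coordinates. Bounds and labels
# are assumptions for the sketch.
REGIONS = {
    "A": (0.0, 0.0, 4.0, 4.0),
    "B": (0.0, 4.0, 4.0, 8.0),
    "C": (4.0, 0.0, 8.0, 4.0),
    "D": (4.0, 4.0, 8.0, 8.0),
}

def area_of(point: Tuple[float, float]) -> Optional[str]:
    """Return the identification label of the area containing the point,
    or None if the point lies outside every divided area."""
    x, y = point
    for label, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None
```

Real working maps are rarely rectangular grids, but the same label-to-region association supports both directions needed here: finding the region for a sound source position, and finding the region named in a recognized voice instruction.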
In one embodiment, the cleaning robot includes a sound source positioning system and a semantic recognition system embedded in the cleaning robot. In the actual fixed-point cleaning process, a user sends out a voice instruction, and the cleaning robot judges, through the sound source positioning technology, whether shielding exists between the sound source position of the voice instruction and the cleaning robot. If shielding exists, the cleaning robot recognizes the voice instruction through the semantic recognition technology and travels to the target area where the sound source is located; the user then sends out the voice instruction again, and the cleaning robot accurately locates the position of the sound source through the sound source positioning technology and travels to that position for fixed-point cleaning. If no shielding is judged to exist, semantic recognition is not needed; the position of the sound source is accurately located directly through the sound source positioning technology, and the robot travels to that position for fixed-point cleaning. Through this embodiment, by combining the sound source positioning technology and the semantic recognition technology, the problem of large positioning error when the sound source positioning technology encounters shielding is solved, and accurate fixed-point cleaning is realized.
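The end-to-end flow above can be sketched as follows. Every method on the `robot` object here is an assumed interface introduced for illustration; the disclosure does not specify this API.

```python
def handle_voice_command(robot, command):
    """Fixed-point cleaning flow: coarse positioning via semantic
    recognition when the sound source is occluded, then fine positioning
    via sound source localization. All robot methods are assumed."""
    if robot.is_occluded(command):
        # Coarse positioning: semantic recognition yields the target area,
        # and the robot travels into that area first.
        area = robot.recognize_area(command)
        robot.travel_to_area(area)
        # The user speaks again; from inside the area the sound source
        # can now be located without occlusion.
        command = robot.wait_for_voice_command()
    # Fine positioning: localize the sound source, travel there, clean.
    target = robot.localize_sound_source(command)
    robot.travel_to(target)
    robot.clean_around(target)
```

Note that the non-occluded branch skips semantic recognition entirely, matching the embodiment in which the second voice cleaning instruction is used directly as the first voice cleaning instruction.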
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and the steps may be performed in other orders. Moreover, at least a portion of the steps in the figures may include sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present disclosure also provide a control device of a cleaning robot for implementing the above-mentioned control method of the cleaning robot. The implementation of the solution provided by the device is similar to that described in the above method; therefore, for the specific limitations in the one or more embodiments of the control device of the cleaning robot provided below, reference may be made to the limitations of the control method of the cleaning robot above, and details are not repeated here.
In one embodiment, as shown in fig. 7, there is provided a control device 700 of a cleaning robot, including:
an obtaining module 710, configured to obtain a target area corresponding to a cleaning instruction, where the target area includes a target position to be cleaned;
a control module 720, configured to control the cleaning robot to travel from the current position to the target area, and further configured to identify the target position in the target area, control the cleaning robot to travel to the target position, and clean a preset range area of the target position.
In one embodiment, the control module includes:
the first acquisition sub-module is used for acquiring a first voice cleaning instruction and determining the target position as the sound source position of the first voice cleaning instruction.
In one embodiment, the cleaning instructions include second voice cleaning instructions, and the acquisition module includes:
the second acquisition sub-module is used for acquiring a second voice cleaning instruction;
the identification module is used for identifying the second voice cleaning instruction to obtain identification information;
and the first determining submodule is used for determining the target area matched with the identification information according to the association relation between the identification information and the target area.
In one embodiment, the identification module comprises:
and the identification sub-module is used for identifying the second voice cleaning instruction to obtain identification information under the condition that a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot.
In one embodiment, the cleaning instructions include second voice cleaning instructions, and the first acquisition sub-module includes:
the third acquisition sub-module is used for acquiring a second voice cleaning instruction;
the second determining submodule is used for determining an area where the sound source position of the second voice cleaning instruction is located as a target area under the condition that no shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot;
the control module comprises:
and the third determining submodule is used for taking the second voice cleaning instruction as a first voice cleaning instruction and determining the target position as the sound source position of the second voice cleaning instruction.
In one embodiment, the module for determining whether the shielding object exists includes:
a fourth obtaining sub-module, configured to obtain signal strength and/or angular resolution of the second voice cleaning instruction;
And the fourth determining submodule is used for determining whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot according to the signal intensity and/or the angle resolution.
In one embodiment, the control module includes:
a fifth obtaining sub-module, configured to obtain a current position of the cleaning robot and a preset working map, where the working map includes the target area;
a fifth determining submodule, configured to determine a travel path from the current position to the target area according to the working map;
and the control sub-module is used for controlling the cleaning robot to travel to the target area according to the travel path.
In one embodiment, the obtaining module of the working map includes:
a sixth acquisition sub-module, configured to acquire an initial working map;
the dividing module is used for dividing the initial working map into areas to obtain a working map comprising a plurality of areas, wherein each area in the plurality of areas corresponds to identification information.
The respective modules in the control device of the cleaning robot described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, there is provided a cleaning robot including:
a body;
the moving assembly is arranged on the machine body and used for driving the machine body to move;
the cleaning component is arranged on the machine body and is used for executing a cleaning task according to the set cleaning parameters;
a memory for storing instructions executable by the processor;
the processor comprises an instruction identification unit, is arranged on the machine body, is electrically connected with the moving assembly, the cleaning assembly and the memory, and is used for realizing the control method of the cleaning robot in any one of the embodiments of the disclosure when executing the instruction.
In one embodiment, the cleaning robot further comprises a sound source positioning assembly disposed on the body for receiving a voice cleaning instruction.
In one embodiment, the instruction recognition unit includes a semantic recognition unit.
In one embodiment, the cleaning robot further comprises at least one of the following components:
the visual sensor assembly is arranged on the machine body, is electrically connected with the processor and is used for acquiring image data of a preset range of the position of the cleaning robot;
and the laser radar component is arranged on the machine body, is electrically connected with the processor and is used for acquiring laser point cloud data of a preset range of the position of the cleaning robot.
Fig. 8 is a block diagram of a cleaning robot according to an exemplary embodiment, and fig. 9 is a top view of the block diagram of the cleaning robot according to an exemplary embodiment. Referring to fig. 8 and 9, the vision sensor assembly may be installed at the front end of the body to facilitate image collection, the laser radar assembly may be installed at the upper end of the body to obtain the contour of the house interior, and the sound source positioning assembly and the semantic recognition unit may be embedded inside the body. In one example, the sound source positioning assembly may be a microphone array; for example, a board carrying the microphone array is placed under the cover of the laser radar assembly and may include four microphones. Because the microphones are placed at different positions, each microphone receives the sound at a different time, and the direction of the sound can be calculated from these differences in arrival time, thereby locating the sound source.
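As a minimal sketch of this idea, the far-field bearing of a sound source can be estimated from the time difference of arrival (TDOA) at a single pair of microphones; a full four-microphone array repeats this over several pairs. The spacing value, speed of sound, and function name below are illustrative assumptions, not parameters from the disclosure.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def bearing_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Estimate the angle of arrival (degrees, measured from the axis
    joining the two microphones) from the time difference of arrival.
    Far-field approximation: delay = spacing * cos(theta) / c."""
    cos_theta = (delay_s * SPEED_OF_SOUND) / mic_spacing_m
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numerical noise
    return math.degrees(math.acos(cos_theta))
```

A zero delay means the source is broadside to the microphone pair (90 degrees), while the maximum delay of spacing/c means the source lies on the array axis (0 degrees); combining the bearings from multiple non-parallel pairs resolves the full direction.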
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data such as a working map of the cleaning robot. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of controlling a cleaning robot.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of a portion of the structure associated with an embodiment of the present disclosure and is not limiting of the computer device to which an embodiment of the present disclosure is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) according to the embodiments of the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the embodiments provided by the present disclosure may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided in the present disclosure may be general-purpose processors, central processing units, graphic processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like, without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few implementations of the disclosed embodiments, and their descriptions, while relatively specific and detailed, are not to be construed as limiting the scope of the disclosed embodiments. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the disclosed embodiments, and such modifications and improvements fall within the protection scope of the disclosed embodiments. Accordingly, the scope of the disclosed embodiments should be determined by the appended claims.

Claims (15)

1. A control method of a cleaning robot, the method comprising:
acquiring a target area corresponding to a cleaning instruction, wherein the target area comprises a target position to be cleaned;
controlling the cleaning robot to travel from the current position to the target area;
and identifying the target position in the target area, controlling the cleaning robot to travel to the target position, and cleaning a preset range area of the target position.
2. The method of claim 1, wherein the identifying the target position in the target area comprises:
and acquiring a first voice cleaning instruction, and determining the target position as the sound source position of the first voice cleaning instruction.
3. The method of claim 1, wherein the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
acquiring a second voice cleaning instruction;
identifying the second voice cleaning instruction to obtain identification information;
and determining a target area matched with the identification information according to the association relation between the identification information and the target area.
4. The method of claim 3, wherein said identifying said second voice cleaning instruction, resulting in identification information, comprises:
and under the condition that a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot, identifying the second voice cleaning instruction to obtain identification information.
5. The method of claim 2, wherein the cleaning instruction includes a second voice cleaning instruction, and the acquiring the target area corresponding to the cleaning instruction includes:
Acquiring a second voice cleaning instruction;
determining an area where the sound source position of the second voice cleaning instruction is located as a target area under the condition that no shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot;
the acquiring the first voice cleaning instruction, determining the target position as the sound source position of the first voice cleaning instruction, includes:
and taking the second voice cleaning instruction as a first voice cleaning instruction, and determining the target position as the sound source position of the second voice cleaning instruction.
6. The method according to claim 4 or 5, wherein the determination of whether the obstruction is present comprises:
acquiring the signal intensity and/or the angular resolution of the second voice cleaning instruction;
and determining whether a shielding object exists between the sound source position of the second voice cleaning instruction and the current position of the cleaning robot according to the signal intensity and/or the angle resolution.
7. The method of claim 1, wherein the controlling the cleaning robot to travel from the current location to the target area comprises:
acquiring the current position of the cleaning robot and a preset working map, wherein the working map comprises the target area;
Determining a driving path from the current position to the target area according to the working map;
and controlling the cleaning robot to travel to the target area according to the travel path.
8. The method of claim 7, wherein the working map is obtained by:
acquiring an initial working map;
and carrying out area division on the initial working map to obtain a working map comprising a plurality of areas, wherein each area in the plurality of areas corresponds to identification information.
9. A control device of a cleaning robot, the device comprising:
the acquisition module is used for acquiring a target area corresponding to the cleaning instruction, wherein the target area comprises a target position to be cleaned;
the control module is used for controlling the cleaning robot to travel from the current position to the target area; and the cleaning robot is used for identifying the target position in the target area, controlling the cleaning robot to travel to the target position and cleaning a preset range area of the target position.
10. A cleaning robot, comprising:
a body;
the moving assembly is arranged on the machine body and used for driving the machine body to move;
The cleaning component is arranged on the machine body and is used for executing a cleaning task according to the set cleaning parameters;
a memory for storing instructions executable by the processor;
a processor, including an instruction recognition unit, disposed on the body, electrically connected to the moving assembly, the cleaning assembly, and the memory, for implementing the control method of the cleaning robot according to any one of claims 1 to 8 when executing the instruction.
11. The cleaning robot of claim 10, further comprising:
the sound source positioning assembly is arranged on the machine body and is used for receiving the voice cleaning instruction.
12. The cleaning robot of claim 10, wherein the instruction recognition unit includes a semantic recognition unit.
13. The cleaning robot of claim 10, further comprising at least one of the following components:
the visual sensor assembly is arranged on the machine body, is electrically connected with the processor and is used for acquiring image data of a preset range of the position of the cleaning robot;
and the laser radar component is arranged on the machine body, is electrically connected with the processor and is used for acquiring laser point cloud data of a preset range of the position of the cleaning robot.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, carries out the steps of the method of controlling a cleaning robot according to any one of claims 1 to 8.
15. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, realizes the steps of the control method of a cleaning robot as claimed in any one of claims 1 to 8.
CN202211157471.4A 2022-09-22 2022-09-22 Control method and device of cleaning robot and cleaning robot Pending CN117770713A (en)

Publications (1)

Publication Number Publication Date
CN117770713A 2024-03-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination