CN117941991A - Control method and device of cleaning equipment, storage medium and electronic device - Google Patents


Info

Publication number
CN117941991A
Authority
CN
China
Prior art keywords
target object
voice
information
cleaning
cleaning device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211277621.5A
Other languages
Chinese (zh)
Inventor
耿文峰
孙佳佳
朱晨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202211277621.5A priority Critical patent/CN117941991A/en
Publication of CN117941991A publication Critical patent/CN117941991A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a control method and device for a cleaning device, a storage medium, and an electronic device, wherein the method comprises the following steps: acquiring voice control data of a target object, wherein the voice control data are used for controlling the cleaning device to clean the location area where the target object is located; locating the target object according to the voice control data and the environmental information of the cleaning device to obtain first orientation information of the target object; controlling the cleaning device to turn toward the target object according to the first orientation information; and controlling the cleaning device to move toward the target object and clean the location area where the target object is located. This technical scheme improves the flexibility of area cleaning.

Description

Control method and device of cleaning equipment, storage medium and electronic device
[ Field of technology ]
The application relates to the field of smart homes, and in particular to a control method and device for a cleaning device, a storage medium, and an electronic device.
[ Background Art ]
Currently, a user can set the cleaning area, cleaning time, and other parameters of a cleaning device through an application terminal paired with the device. When the user wants the device to clean an area near their own position, they must mark a cleaning area on the area map of the application and summon the device to move to the marked area for cleaning.
However, in this control manner, the cleaning area must be planned on the application's area map according to the user's position. The area is usually not a regular one, the planning process is complex, and the flexibility of area cleaning is poor. Thus, the control method of the cleaning device in the related art suffers from poor area-cleaning flexibility caused by the complicated area-planning process.
[ Invention ]
The application aims to provide a control method and device of cleaning equipment, a storage medium and an electronic device, so as to at least solve the problem that the control method of the cleaning equipment in the related art has poor flexibility of cleaning areas due to complex process of area planning.
The application aims at realizing the following technical scheme:
According to an aspect of an embodiment of the present application, there is provided a control method of a cleaning apparatus, including: acquiring voice control data of a target object, wherein the voice control data are used for controlling the cleaning equipment to clean a position area where the target object is located; positioning the target object according to the voice control data and the environmental information of the cleaning equipment to obtain first orientation information of the target object; controlling the cleaning equipment to turn to the target object according to the first orientation information; and controlling the cleaning equipment to move towards the target object, and cleaning the area of the position area where the target object is located.
According to another aspect of the embodiment of the present application, there is also provided a control device of a cleaning apparatus, including: an acquisition unit configured to acquire voice control data of a target object, where the voice control data is configured to control the cleaning device to perform area cleaning on a location area where the target object is located; the positioning unit is used for positioning the target object according to the voice control data and the environment information of the cleaning equipment to obtain first orientation information of the target object; a first control unit for controlling the cleaning device to turn to the target object according to the first direction information; and the first execution unit is used for controlling the cleaning equipment to move towards the target object and cleaning the area of the position area where the target object is located.
In an exemplary embodiment, the positioning unit includes: a first determining module configured to determine, according to the environmental information in which the cleaning device is located, voice interference information corresponding to the voice control data; a correction module configured to correct the voice control data by using the voice interference information to obtain corrected voice control data; an extraction module configured to extract voice features of the corrected voice control data to obtain target voice features; and a positioning module configured to perform sound source localization according to the target voice features to obtain the first orientation information.
In one exemplary embodiment, the correction module includes: and the execution sub-module is used for executing echo cancellation operation on the voice control data by using echo information corresponding to the voice control data to obtain corrected voice control data, wherein the voice interference information comprises the echo information corresponding to the voice control data.
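The patent names an echo cancellation operation but does not specify how it is performed. One standard approach is a normalized-LMS (NLMS) adaptive filter that estimates the echo of a known reference signal and subtracts it from the microphone signal; the sketch below illustrates that idea only, and all names, filter length, and step size are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def nlms_echo_cancel(mic, ref, filt_len=8, mu=0.5, eps=1e-8):
    """Remove the component of `ref` (the known echo reference) from
    `mic` with a normalized-LMS adaptive filter; the returned error
    signal plays the role of the corrected voice control data."""
    w = np.zeros(filt_len)           # adaptive filter taps
    buf = np.zeros(filt_len)         # most recent reference samples
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n]              # buf[k] == ref[n - k]
        echo_est = w @ buf           # estimated echo at sample n
        e = mic[n] - echo_est        # echo-cancelled sample
        w += mu * e * buf / (buf @ buf + eps)  # NLMS tap update
        out[n] = e
    return out
```

After the filter converges, the residual energy is far below the raw microphone energy when the microphone signal is dominated by echo of the reference.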
In one exemplary embodiment, the first determining module includes: and the first determining submodule is used for determining voice interference information corresponding to the voice control data according to the barrier information in the position area of the cleaning equipment and the wall information in the position area of the cleaning equipment, wherein the environment information of the cleaning equipment comprises the barrier information and the wall information.
In one exemplary embodiment, the corrected voice control data includes multiple paths of voice data in one-to-one correspondence with multiple paths of microphones in a microphone array of the cleaning device; the extraction module comprises: the extraction sub-module is used for extracting voice characteristics of each path of voice data in the multipath voice data respectively to obtain voice characteristics corresponding to each path of voice data; and the second determining submodule is used for determining the receiving time difference of any two paths of voice data in the multipath voice data according to the receiving time of each path of voice data, wherein the target voice characteristic comprises the voice characteristic corresponding to each path of voice data and the receiving time difference of any two paths of voice data.
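The receiving-time difference between two microphone channels described by the second determining sub-module can be estimated by cross-correlation. The patent does not name a method; a minimal sketch in discrete samples, with function name and convention assumed, is:

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the arrival-time difference (in samples) between two
    microphone channels via cross-correlation. A positive result means
    channel B received the signal later than channel A."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # index len(sig_a) - 1 corresponds to zero lag
    return int(np.argmax(corr)) - (len(sig_a) - 1)
```

In practice the sample lag is divided by the sampling rate to obtain the time difference in seconds used for localization.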
In an exemplary embodiment, the apparatus further comprises: the first acquisition unit is used for acquiring images through an image acquisition component on the cleaning equipment before the cleaning equipment is controlled to move towards the target object to obtain a first acquisition image, and acquiring point cloud data through a point cloud acquisition component to obtain object point cloud data of the target object; the first identification unit is used for carrying out object identification on the first acquired image to obtain second azimuth information of the target object; the second identification unit is used for carrying out object identification on the object point cloud data to obtain the reference azimuth information of the target object; and the correction unit is used for correcting the second azimuth information by using the reference azimuth information to obtain corrected second azimuth information, wherein the corrected second azimuth information is used for controlling the cleaning equipment to move towards the target object.
In an exemplary embodiment, the first control unit includes: a second determining module configured to determine a target angle through which the cleaning device is to rotate according to the first orientation information, where the target angle is a relative angle between the target object and the cleaning device; and a control module configured to control the cleaning device to rotate through the target angle so as to turn the cleaning device toward the target object.
In one exemplary embodiment, the second determining module includes: a third determining sub-module, configured to determine a first rotation angle corresponding to a clockwise direction according to the first direction information, where the first rotation angle is an angle at which the cleaning apparatus needs to rotate in the clockwise direction to turn the target object; a fourth determining sub-module, configured to determine a second rotation angle corresponding to a counterclockwise direction according to the first direction information, where the second rotation angle is an angle at which the cleaning apparatus is required to rotate in the counterclockwise direction to turn the target object; and a fifth determining sub-module configured to determine the smaller angle of the first rotation angle and the second rotation angle as the target angle.
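The clockwise/counterclockwise angle selection in these sub-modules can be sketched as follows, assuming bearings in degrees with clockwise taken as the positive direction (a convention the patent does not fix; the function name is illustrative):

```python
def target_turn(current_heading_deg, target_bearing_deg):
    """Return (angle, direction): the smaller of the clockwise and
    counterclockwise rotations that point the device at the target
    bearing, matching the third/fourth/fifth determining sub-modules."""
    cw = (target_bearing_deg - current_heading_deg) % 360   # first rotation angle
    ccw = (current_heading_deg - target_bearing_deg) % 360  # second rotation angle
    if cw <= ccw:
        return cw, "clockwise"
    return ccw, "counterclockwise"
```

For example, a device heading 350° that must face 10° rotates 20° clockwise rather than 340° counterclockwise.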
In an exemplary embodiment, the apparatus further comprises: the second acquisition unit is used for acquiring images through an image acquisition component on the cleaning equipment after the cleaning equipment is controlled to rotate by the target angle, so as to obtain a second acquisition image; and a second control unit for controlling the cleaning device to continuously rotate along a preset direction until the target object is identified in the image acquired by the image acquisition component under the condition that the target object is not identified in the second acquired image.
In an exemplary embodiment, the apparatus further comprises: the detection unit is used for carrying out image acquisition through the image acquisition component on the cleaning equipment to obtain a second acquired image, and then carrying out human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, wherein the human-shaped object detection result is used for indicating the human-shaped object identified from the second acquired image; a first determining unit configured to determine that the target object is identified from the second captured image, in a case where the human-shaped object detection result indicates that the human-shaped object is identified from the second captured image; and a second determining unit configured to determine that the target object is not recognized from the second captured image, in a case where the human-shaped object detection result indicates that the human-shaped object is not recognized from the second captured image.
In an exemplary embodiment, the apparatus further comprises: a third determining unit, configured to determine, after the performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, an object position corresponding to each of the plurality of human-shaped objects if the human-shaped object detection result indicates that the plurality of human-shaped objects are identified from the second acquired image; and a fourth determining unit, configured to determine, as the target object, a humanoid object whose object azimuth matches with azimuth information of the target object, from among the plurality of humanoid objects.
In an exemplary embodiment, the apparatus further comprises: and a third control unit configured to control the cleaning apparatus to stop moving toward the target object and control the cleaning apparatus to perform a cleaning operation until an object to be cleaned is not detected within a detection range of a target detection part of the cleaning apparatus, in a case where a distance between the cleaning apparatus and the target object is detected to be less than or equal to a preset distance threshold after the control of the cleaning apparatus to move toward the target object.
According to a further aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the control method of the cleaning apparatus described above when run.
According to still another aspect of the embodiments of the present application, there is further provided an electronic apparatus including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the control method of the cleaning device described above through the computer program.
In the embodiment of the application, voice control data of a target object is obtained by adopting a mode of carrying out sound source positioning based on voice control data and environment information and controlling a cleaning device to move to a sound source position for carrying out area cleaning, wherein the voice control data is used for controlling the cleaning device to carry out area cleaning on a position area where the target object is positioned; positioning a target object according to the voice control data and the environmental information of the cleaning equipment to obtain first orientation information of the target object; controlling the cleaning device to turn to the target object according to the first orientation information; and controlling the cleaning equipment to move towards the target object, and cleaning the area of the position where the target object is located. Because the cleaning equipment is directly called through the voice to clean the area, the area planning is not required in advance, and meanwhile, the environmental information of the cleaning equipment is combined when the sound source positioning is carried out, the influence of environmental factors on the sound source positioning can be avoided, the accuracy of the sound source positioning is improved, the purposes of reducing the complexity of the area planning process while improving the accuracy of the positioning of the area to be cleaned can be realized, and the technical effect of improving the flexibility of the area cleaning is achieved.
[ Description of the drawings ]
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an alternative method of controlling a cleaning device in accordance with an embodiment of the application;
FIG. 2 is a flow chart of an alternative method of controlling a cleaning device according to an embodiment of the application;
FIG. 3 is a flow chart of another alternative method of controlling a cleaning device according to an embodiment of the application;
FIG. 4 is a block diagram of a control device of an alternative cleaning apparatus according to an embodiment of the present application;
FIG. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application.
[ Detailed description ]
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
According to one aspect of an embodiment of the present application, a control method of a cleaning apparatus is provided. Alternatively, in the present embodiment, the control method of the cleaning apparatus described above may be applied to a hardware environment constituted by the cleaning apparatus 102, the base station 104, and the server 106 as shown in fig. 1. As shown in fig. 1, the cleaning device 102 may be connected to the base station 104 and/or the server 106 (e.g., a voice cloud platform) over a network to enable interaction between the cleaning device 102 and the base station 104 and/or the server 106.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth, infrared. The network used by the cleaning device 102 to communicate with the base station 104 and/or the server 106 may be the same as or different from the network used by the base station 104 to communicate with the server 106. The cleaning device 102 may include, but is not limited to: a sweeping robot (a sweeper for short), a floor-washing robot (a floor washer for short), a robot integrating sweeping and washing, a self-cleaning robot, and the like.
The control method of the cleaning device according to the embodiment of the present application may be executed by the cleaning device 102 or the server 106 separately, or by the cleaning device 102 and the server 106 together. When the cleaning device 102 executes the control method, it may do so through a client installed on it.
Taking the control method of the cleaning device in this embodiment performed by the cleaning device 102 as an example, fig. 2 is a schematic flow chart of an alternative control method of the cleaning device according to an embodiment of the present application, as shown in fig. 2, the flow of the method may include the following steps:
Step S202, voice control data of the target object are acquired, wherein the voice control data are used for controlling the cleaning equipment to clean the area of the position area where the target object is located.
The control method of the cleaning device in the embodiment can be applied to a scene of calling the cleaning device to clean the area of the position area where the target object is located. The target object may be used to identify users who have a need for regional cleaning. The location area where the target object is located may be an indoor area, for example, a user's home room, an office, a factory workshop, or the like, or may be an outdoor area, for example, a camping area. The cleaning device may be the sweeper, scrubber or the like. This is not limited in this embodiment.
In the related art, if a user needs the cleaning device to clean an area near the user's location, the user must mark the area to be cleaned on the area map of the application bound to the cleaning device and summon the device to move to the marked area for cleaning. However, in many cases the area to be cleaned is not a fixed, pre-planned area, so the target object must repeatedly plan it in the application's area map; the planning process is complex and takes a long time, which degrades the user experience. In addition, because the cleaning area is planned manually, there may be a discrepancy between the area marked in the application and the area the target object actually wants cleaned, resulting in poor flexibility of area cleaning.
In order to solve at least some of the above problems, in the present embodiment, an instruction to call a cleaning task is issued by voice, sound source localization is performed based on voice data and environmental information, and the cleaning device is controlled to move to the sound source position based on the sound source localization result to perform area cleaning. Because the cleaning equipment is directly called through the voice to clean the area, the area planning is not required in advance, and meanwhile, the environment information of the cleaning equipment is combined when the sound source positioning is carried out, the influence of environment factors on the sound source positioning can be avoided, the accuracy of the sound source positioning is improved, and the purpose of reducing the complexity of the area planning process while the accuracy of the positioning of the area to be cleaned is improved can be realized.
The cleaning device may be provided with a voice acquisition component for collecting voice data. The voice acquisition component may be a microphone, a microphone array, a sound acquisition card, or another sound-receiving component. When the voice acquisition component is a microphone array, the array may comprise multiple microphone channels, each of which can collect its own voice data. Because the channels may be installed at different positions and face different acquisition directions, the cleaning device can receive voice from different directions and perform sound source localization based on the collected voice data.
In this embodiment, when the target object needs to perform area cleaning on the located location area by using the cleaning device, the cleaning device may be called by voice, and the cleaning device is controlled to move and perform area cleaning. The voice data acquisition can be carried out by the voice acquisition component on the cleaning equipment to obtain the voice control data. And under the condition that the voice control data is acquired, the calling cleaning function of the cleaning equipment can be triggered. Here, the voice control data may be used to control the cleaning apparatus to perform area cleaning on a location area where the target object is located.
Optionally, after the voice control data is collected, voice recognition may be performed on the voice control data to obtain a voice control instruction, where the voice control instruction may be used to control the cleaning device to perform area cleaning on the location area where the target object is located, that is, based on the voice control instruction recognized from the voice control data, the cleaning device is controlled to perform area cleaning on the location area where the target object is located. The voice control command may be a preset control command or a control command obtained by analyzing voice control data, and in this embodiment, the voice control data and the voice control command are not limited.
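The mapping from a recognized utterance to a control instruction is left open by the patent; a trivial keyword-matching sketch, with entirely illustrative phrases and intent names, might be:

```python
def parse_intent(text):
    """Map recognized speech text to a control intent.
    The keyword lists are purely illustrative placeholders."""
    text = text.lower()
    if any(k in text for k in ("come here", "clean here", "over here")):
        return "summon_clean"       # clean the speaker's location area
    if "stop" in text:
        return "stop"
    return None                     # no recognized control instruction
```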
Step S204, positioning the target object according to the voice control data and the environmental information of the cleaning equipment to obtain the first orientation information of the target object.
From the acquired voice control data, a localization of the sound source may be performed to determine a position of the target object with respect to the cleaning device, where the position may refer to a direction, which may be an angle of the target object with respect to the cleaning device. Taking the voice acquisition component of the cleaning device as a microphone array for example, because the positions of the microphones are different, the time for receiving voice and the acquired voice data of the microphones are different, and according to the voice data acquired by the microphones, sound source positioning can be performed to obtain the azimuth information of the target object, wherein the azimuth information can be the angle information of the target object relative to the cleaning device, such as azimuth angle, pitch angle and the like.
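For a single microphone pair, the azimuth can be recovered from the time difference of arrival under a far-field assumption; the geometry and the nominal speed of sound below are standard textbook assumptions, not details given by the patent:

```python
import math

def azimuth_from_tdoa(tdoa_s, mic_spacing_m, speed_of_sound=343.0):
    """Far-field bearing of a source relative to a two-microphone axis.
    tdoa_s is the arrival-time difference in seconds (positive when the
    second mic hears the sound later). Returns the angle in degrees
    between the source direction and the microphone axis."""
    x = tdoa_s * speed_of_sound / mic_spacing_m
    x = max(-1.0, min(1.0, x))      # clamp numerical noise before acos
    return math.degrees(math.acos(x))
```

A zero time difference means the source lies broadside to the pair (90°); the maximum difference, spacing divided by the speed of sound, means the source lies on the axis (0°).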
For example, a plurality of microphones may be arranged along the circumferential side of the main body of the cleaning apparatus, each microphone being a microphone that collects voice in one path. Or a plurality of microphones may be disposed along the circumference of the LDS (i.e., lidar) cover of the cleaning device, each of which may serve as a microphone for collecting voice in one path. Of course, the above-described microphone layout is merely exemplary, and the layout of the microphones is not limited in this embodiment.
Since the location area where the cleaning device operates (e.g., an indoor area) often contains many objects placed on the floor, such as tables and chairs, appliances, and trash cans, the control audio emitted by the target object may be reflected toward other locations when it encounters these objects or corners. The voice control data received by the cleaning device may therefore include reflected voice signals, and if sound source localization were performed directly on the raw voice control data, the presence of such interference data (for example, the reflected voice signals) could reduce localization accuracy.
In this embodiment, the target object may be positioned according to the voice control data and the environmental information where the cleaning device is located, so as to obtain the first orientation information of the target object. Here, the first orientation information may be orientation information (e.g., angle information) of the target object with respect to the cleaning apparatus, and the orientation information may be similar to the foregoing, and will not be described herein.
Optionally, in the case of acquiring the voice control data of the target object, the environment information of the cleaning device may be combined, the possible interference data may be determined according to the environment information, the voice control data may be corrected based on the determined interference data, so as to eliminate the interference data in the voice control data, and then the sound source positioning may be performed according to the corrected voice control data.
Step S206, controlling the cleaning device to turn to the target object according to the first direction information.
After the first orientation information is determined, the cleaning device may be controlled to steer toward the target object in accordance with the first orientation information. Controlling the cleaning device to turn to the target object according to the first orientation information may be: and determining a rotation angle of the cleaning device required to rotate according to the first orientation information, and rotating the cleaning device according to the determined rotation angle to steer the cleaning device to the target object. Turning to the target object may refer to the image acquisition component of the cleaning device being aimed at the target object to ensure that the target object may be located within the acquisition area of the image acquisition component. Here, the image pickup part may be a monocular camera mounted on the cleaning apparatus. The rotation direction of the cleaning device may be a fixed rotation direction, such as a counterclockwise direction or a clockwise direction, or a rotation direction determined according to the magnitude of the rotation angle, i.e., a direction in which the rotation angle is smaller is selected as the rotation direction of the cleaning device.
Alternatively, since the cleaning device is rotated according to the first orientation information, the target object may not actually be located in the acquisition area of the image acquisition unit after the cleaning device is controlled to turn to the target object according to the first orientation information, since the orientation information may deviate from the actual orientation of the target object. Therefore, after the cleaning device is rotated, the image acquisition component of the cleaning device can be controlled to acquire images of the acquisition area of the cleaning device, and the acquired images can be used for identifying the target object so as to determine whether the cleaning device successfully turns to the target object. If the target object is not located within the acquisition region of the image acquisition component, the cleaning device may be rotated to successfully steer the cleaning device toward the target object.
Step S208, the cleaning device is controlled to move towards the target object, and the area cleaning is carried out on the position area where the target object is located.
In this embodiment, after the cleaning device is turned toward the target object, the cleaning device may be controlled to move toward the target object. The movement control mode of the cleaning device can be as follows: determining the relative position of the cleaning device and the target object (e.g., the distance between the two, the relative angle of the target object and the cleaning device) by the image acquisition component or other components described above; and controlling the cleaning device to move towards the target object according to the determined relative position. The cleaning device may be controlled to perform area cleaning on a location area where the target object is located during or after controlling the movement of the cleaning device to the target object. Here, the location area where the target object is located may be an area formed by taking the target object as a center and taking a certain length as a radius.
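The "move toward the target object according to the determined relative position" step can be illustrated with a toy per-step position update; the step size and 2-D coordinate representation are arbitrary illustrative choices:

```python
import math

def step_toward(pos, target, step=0.2):
    """One control step: advance the device position `pos` toward
    `target` by at most `step` metres, heading directly at the target.
    Positions are (x, y) tuples in metres."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= step:                # close enough to snap onto target
        return target
    return (pos[0] + step * dx / dist, pos[1] + step * dy / dist)
```

Recomputing the relative position before every step, as the embodiment describes, keeps the device heading at the target even if the target moves.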
Alternatively, the movement of the cleaning device to the target object may be started after the cleaning device is determined to turn to the target object, and during the movement, the relative position between the cleaning device and the target object is calculated in real time, and the movement direction of the cleaning device is adjusted based on the relative position determined in real time, so as to ensure that the cleaning device always moves towards the target object.
Through the above steps S202 to S208, voice control data of the target object is acquired, the voice control data being used to control the cleaning device to perform area cleaning on the position area where the target object is located; the target object is positioned according to the voice control data and the environmental information of the cleaning device to obtain first orientation information of the target object; the cleaning device is controlled to turn toward the target object according to the first orientation information; and the cleaning device is controlled to move toward the target object and perform area cleaning on the position area where the target object is located. This solves the problem of a complex area planning process and improves the flexibility of area cleaning.
In an exemplary embodiment, positioning the target object according to the voice control data and the environmental information of the cleaning device to obtain the first orientation information of the target object includes:
S11, determining voice interference information corresponding to voice control data according to the environmental information of the cleaning equipment;
S12, correcting the voice control data by using voice interference information to obtain corrected voice control data;
S13, extracting voice characteristics of the corrected voice control data to obtain target voice characteristics;
S14, sound source positioning is carried out according to the target voice characteristics, and first direction information is obtained.
In this embodiment, according to the environmental information in which the cleaning device is located, the voice interference information that may be generated in the corresponding environment by the voice signal of the target object may be calculated, so that the voice interference information corresponding to the voice control data may be determined. For the determined speech disturbance information, the speech control data may be corrected using the determined speech disturbance information, resulting in corrected speech control data. The correction procedure here may be: and eliminating voice interference information in the voice control data.
In this embodiment, the manner of performing sound source localization according to the corrected voice control data may be to perform voice feature extraction on the corrected voice control data to obtain the target voice feature; and performing sound source positioning according to the obtained target voice characteristics to obtain first direction information. Here, the extracted voice features may include voice feature information such as power, time difference between receiving voice data by different microphones, and other voice features for sound source localization, which is not limited in this embodiment.
Alternatively, a sound source localization algorithm based on a microphone array may be used to perform sound source localization with the corrected voice control data, for example a localization method based on TDOA (Time Difference of Arrival) estimation; other sound source localization methods may also be used. The obtained first orientation information may be angle information of the target object, used to indicate the angle of the target object relative to the cleaning device, and may further include other orientation information, for example the relative distance between the target object and the cleaning device, which is not limited in this embodiment.
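As a minimal illustration of TDOA-based angle estimation, the following Python sketch converts the measured arrival-time difference between two microphones into a source angle under the common far-field model (the microphone spacing and the speed-of-sound constant are assumed inputs, not values fixed by this embodiment):

```python
import math

def tdoa_angle(delay_s, mic_spacing_m, speed_of_sound=343.0):
    """Convert the arrival-time difference between two microphones into a
    source angle (degrees) under the far-field model
    angle = asin(c * tau / d).  0 degrees means broadside (directly in
    front of the pair); +/-90 degrees means along the microphone axis."""
    ratio = speed_of_sound * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise out of asin's domain
    return math.degrees(math.asin(ratio))
```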
According to the embodiment, the voice control data are corrected by combining the environment information, and then the sound source positioning is performed according to the voice characteristics of the corrected voice control data, so that the influence of the interference information in the environment on the sound source positioning can be avoided, and the accuracy of the sound source positioning is improved.
In one exemplary embodiment, correcting speech control data using speech disturbance information to obtain corrected speech control data includes:
S21, performing echo cancellation operation on the voice control data by using the echo information corresponding to the voice control data to obtain corrected voice control data, wherein the voice interference information comprises the echo information corresponding to the voice control data.
The interference of the environment with the voice control data acquired by the cleaning device is mainly caused by echoes of the voice signal off objects around the cleaning device, so an echo cancellation operation may be performed on the voice control data. The voice interference information may include the echo information corresponding to the voice control data, and the voice control data is corrected using the voice interference information in the following manner: an echo cancellation operation is performed on the voice control data using the echo information corresponding to the voice control data.
For example, after the voice control data of the target object is acquired, the echo direction of the sound source may be determined according to the environmental information in which the cleaning device is located, and the audio of the echo direction may be estimated. Under the condition that the audio of the echo direction is determined, corresponding audio in the audio information received by the microphone can be eliminated, and sound source positioning is carried out according to the remaining audio information.
According to the embodiment, the echo information is determined based on the environment information, and the correction of the voice control data is performed by eliminating the echo information of the voice control data, so that the accuracy of sound source positioning can be improved.
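A minimal sketch of such an echo cancellation step, assuming a single echo path whose delay and attenuation have already been estimated from the environment (a real implementation would use adaptive filtering over many reflection paths):

```python
def cancel_echo(received, delay_samples, attenuation):
    """Subtract a single estimated echo path -- a delayed, attenuated copy
    of the direct signal -- from the received samples.  Assumes the echo
    model received[n] = direct[n] + attenuation * direct[n - delay]."""
    cleaned = list(received)
    for i in range(delay_samples, len(cleaned)):
        # cleaned[i - delay] already holds the recovered direct signal,
        # so subtracting its scaled copy removes the echo at sample i
        cleaned[i] -= attenuation * cleaned[i - delay_samples]
    return cleaned
```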
In one exemplary embodiment, determining speech disturbance information corresponding to speech control data based on environmental information in which a cleaning device is located includes:
S31, determining voice interference information corresponding to the voice control data according to the barrier information in the position area of the cleaning equipment and the wall information in the position area of the cleaning equipment, wherein the environment information of the cleaning equipment comprises the barrier information and the wall information.
In this embodiment, the environmental information in which the cleaning apparatus is located may include obstacle information and wall information in an area where the cleaning apparatus is located. Here, the obstacle may be an object placed on the ground or at a small distance from the ground and having a certain height. The obstacle information may be information such as a position and a size of an obstacle; the wall information may be information such as the distance of the wall (e.g., corner) from the cleaning device.
According to the obstacle information in the area where the cleaning device is located and the wall information in the area where the cleaning device is located, voice interference information corresponding to the voice control data can be determined. The determined voice disturbance information may include echo information caused by reflected voice signals of obstacles, walls, etc., that is, echo information corresponding to voice control data.
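For instance, a mapped wall distance can be translated into the expected echo delay of the reflected voice signal; the sketch below assumes a simple round-trip reflection path of roughly twice the wall distance, and the sample rate is an assumed parameter:

```python
def echo_delay_samples(wall_distance_m, speed_of_sound=343.0, sample_rate=16000):
    """Translate a mapped wall distance into the echo delay, in samples,
    that a reflected voice signal would add, assuming a round-trip
    reflection path of roughly twice the wall distance."""
    delay_s = 2.0 * wall_distance_m / speed_of_sound
    return round(delay_s * sample_rate)
```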
According to the embodiment, the voice interference information is determined according to the obstacle information and the wall information of the area where the cleaning equipment is located, so that the accuracy and the efficiency of determining the voice interference information can be improved, and the accuracy of sound source positioning is improved.
In one exemplary embodiment, the voice acquisition component may be a microphone array, the voice control data may include multiple voice data acquired by multiple microphones in the microphone array, the multiple microphones may be all or part of the microphones in the microphone array, and the multiple microphones are in one-to-one correspondence with the multiple voice data. The corrected voice control data includes multiple voice data corresponding to the multiple microphones one by one, where the multiple voice data may be corrected voice data instead of multiple voice data directly collected by the multiple microphones.
Correspondingly, extracting the voice characteristic of the corrected voice control data to obtain the target voice characteristic, including:
S41, respectively extracting voice characteristics of each path of voice data in the plurality of paths of voice data to obtain voice characteristics corresponding to each path of voice data;
S42, according to the receiving time of each path of voice data, determining the receiving time difference of any two paths of voice data in the multipath voice data, wherein the target voice features comprise the voice features corresponding to each path of voice data and the receiving time difference of any two paths of voice data.
When the voice feature extraction is performed, the voice feature extraction can be performed on each path of voice data in the plurality of paths of voice data respectively, so as to obtain the voice feature corresponding to each path of voice data. Here, the voice feature corresponding to each path of voice data may include voice feature information such as power of each path of voice data. The target speech features may include speech features corresponding to each path of speech data.
According to the receiving time of each path of voice data, the receiving time difference of any two paths of voice data among the multiple paths can be determined. The target voice features may also include the time difference between voice data collected by different microphones, that is, the receiving time difference of any two paths of voice data. Since sound source localization does not necessarily require the voice data collected by all microphones in the array, any two microphones may be selected from the array and the voice data they collect used for sound source localization; in that case, the multiple paths of voice data are the voice data corresponding to the two selected microphones.
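The receiving time difference between two microphone channels can be estimated, for example, by cross-correlation; the following brute-force Python sketch is illustrative only (production systems typically use FFT-based methods such as GCC-PHAT):

```python
def estimate_delay_samples(sig_a, sig_b):
    """Return the lag (in samples) of sig_b relative to sig_a that
    maximizes their cross-correlation, i.e. how much later the same
    wavefront arrived at microphone B than at microphone A."""
    n = len(sig_a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        score = sum(sig_a[i] * sig_b[i + lag]
                    for i in range(n) if 0 <= i + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```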
For example, when the host of the sweeper receives a call-to-clean instruction issued by voice, the sound source of the calling person is localized according to sound feature information such as the time difference and power of the voice information received by different microphones on the microphone array, and the environmental information of the host's location (including corners, obstacles, and the like) is sent to the sound source localization algorithm to correct the sound source estimate.
According to the embodiment, the accuracy of sound source positioning can be improved by extracting the voice characteristics of voice data received by the multi-path microphones and calculating the time difference of the voice data acquired by the different paths of microphones as the voice characteristics used for sound source positioning.
In an exemplary embodiment, before controlling the movement of the cleaning device towards the target object, the method further comprises:
S51, performing image acquisition through an image acquisition component on the cleaning device to obtain a first acquired image, and performing point cloud data acquisition through a point cloud acquisition component to obtain object point cloud data of the target object;
S52, performing object recognition on the first acquired image to obtain second azimuth information of the target object;
S53, performing object recognition on the object point cloud data to obtain reference azimuth information of the target object;
S54, correcting the second azimuth information by using the reference azimuth information to obtain corrected second azimuth information, wherein the corrected second azimuth information is used for controlling the cleaning device to move toward the target object.
In this embodiment, in order to improve the efficiency of the cleaning device's movement toward the target object, the target object may be positioned based on the image acquired by the image acquisition component on the cleaning device before the movement is controlled, so as to determine azimuth information such as the direction and distance of the target object relative to the cleaning device. The image acquisition component is similar to that in the foregoing embodiments and is not described again here.
After controlling the cleaning device to turn to the target object, image acquisition can be performed by an image acquisition component on the cleaning device, resulting in a first acquired image. Here, the first captured image may be an image obtained by the image capturing section capturing an image of the captured region, and may include the target object or may include other objects than the target object, for example, other humanoid objects, non-humanoid objects, and the like.
For the first acquired image, object recognition may be performed to determine the second azimuth information of the target object; the recognition may be performed using AI (Artificial Intelligence) image technology. The second azimuth information may include the direction and distance of the target object relative to the cleaning device and may be calculated based on a monocular algorithm. Performing object recognition on the first acquired image can compensate for problems such as an error in calculating the first orientation information, movement of the target object, or an error in the rotation angle of the cleaning device, any of which would leave the cleaning device not fully turned toward the target object; it also facilitates controlling the cleaning device to move toward the target object.
Optionally, in order to improve the accuracy of positioning the target object, the point cloud acquisition component may perform point cloud data acquisition to obtain object point cloud data of the target object; and correcting the second azimuth information based on the object point cloud data to obtain corrected second azimuth information. For example, object recognition can be performed on the collected object point cloud data to obtain reference azimuth information of the target object; and correcting the second azimuth information by using the reference azimuth information to obtain corrected second azimuth information.
The object recognition here can be implemented by searching the collected object point cloud data using preset point cloud data of the target object. The reference azimuth information may be the direction and distance of the target object relative to the cleaning device as determined from the object point cloud data. The point cloud acquisition component may be an LDS (Laser Distance Sensor) or another type of sensor, which is not limited in this embodiment.
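One possible way to correct the camera-derived bearing with the LDS reference bearing is a simple weighted blend, sketched below; the blend weight is an assumed tuning parameter and not something fixed by this embodiment:

```python
def correct_bearing(camera_bearing_deg, lidar_bearing_deg, lidar_weight=0.7):
    """Blend the camera-derived bearing of the target with the point-cloud
    (LDS) reference bearing.  The weight leans on the lidar estimate, and
    the difference is wrapped so bearings near 0/360 blend correctly."""
    gap = (lidar_bearing_deg - camera_bearing_deg + 180.0) % 360.0 - 180.0
    return (camera_bearing_deg + lidar_weight * gap) % 360.0
```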
According to this embodiment, object recognition is performed on the acquired image to determine the azimuth information of the target object, and the determined azimuth information is corrected in combination with the point cloud data, which can improve the accuracy of object positioning and thus the efficiency of movement control of the cleaning device.
In one exemplary embodiment, controlling the cleaning device to steer toward the target object according to the first orientation information includes:
S61, determining a target angle by which the cleaning device is to be rotated according to the first orientation information, wherein the target angle is the relative angle between the target object and the cleaning device;
S62, controlling the cleaning device to rotate by the target angle so as to turn the cleaning device toward the target object.
In this embodiment, controlling the cleaning apparatus to turn toward the target object may be achieved by controlling the cleaning apparatus to rotate. After the first orientation information is determined, a target angle at which the cleaning device is to be rotated may be determined. Here, the target angle may be a relative angle of the target object and the cleaning device, and may include a direction and an angle size in which the cleaning device is to be rotated, that is, a rotation direction and a rotation angle, and the rotation direction may be a fixed direction or a direction determined based on the first direction information.
Based on the determined target angle, the cleaning device may be controlled to rotate by the target angle to steer the cleaning device toward the target object. For example, the host may be rotated to adjust the host camera angle based on the corrected sound source angle information. Compared with the mode that the host computer rotates by 360 degrees when the host computer acquires the calling task, the control of the host computer rotation based on the sound source angle information can show higher intelligence.
According to the embodiment, after the angle to be rotated of the cleaning device is determined according to the azimuth information of the positioned sound source, the determined angle is controlled to be rotated, and the accuracy of the steering of the cleaning device to the target object can be improved.
In one exemplary embodiment, determining a target angle at which the cleaning device is to be rotated based on the first orientation information comprises:
S71, determining a first rotation angle corresponding to the clockwise direction according to the first orientation information, wherein the first rotation angle is the angle by which the cleaning device needs to rotate clockwise to turn toward the target object;
S72, determining a second rotation angle corresponding to the counterclockwise direction according to the first orientation information, wherein the second rotation angle is the angle by which the cleaning device needs to rotate counterclockwise to turn toward the target object;
And S73, determining the smaller angle of the first rotation angle and the second rotation angle as a target angle.
In order to reduce the angle of rotation required by the cleaning device and improve the efficiency of rotation control of the cleaning device, the rotation direction of the cleaning device when the cleaning device turns to the target object can be determined according to the first direction information, and the rotation direction can be clockwise or anticlockwise. The rotation direction may be determined based on the magnitude of the angle of rotation required to steer the target object.
In the present embodiment, according to the first direction information, a first rotation angle corresponding to the clockwise direction and a second rotation angle corresponding to the counterclockwise direction can be determined, respectively. Here, the first rotation angle may be an angle of rotation required for the cleaning apparatus to turn the target object in the clockwise direction, and the second rotation angle may be an angle of rotation required for the cleaning apparatus to turn the target object in the counterclockwise direction.
After the first rotation angle and the second rotation angle are determined, the smaller rotation angle of the first rotation angle and the second rotation angle can be determined as the target angle, and the corresponding rotation direction is the rotation direction of the cleaning device. For example, if the angle of rotation required for the host to turn in the clockwise direction is 120 degrees and the angle of rotation required for the host to turn in the counterclockwise direction is 240 degrees, the clockwise direction may be determined as the direction of rotation of the host and 120 degrees may be determined as the angle of rotation of the host.
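The clockwise/counterclockwise comparison above can be sketched as follows (the bearing conventions are illustrative assumptions):

```python
def target_rotation(current_bearing_deg, target_bearing_deg):
    """Compare the clockwise and counterclockwise rotations needed to face
    the target bearing and return (direction, angle) for the smaller one."""
    cw = (current_bearing_deg - target_bearing_deg) % 360.0
    ccw = (target_bearing_deg - current_bearing_deg) % 360.0
    if cw <= ccw:
        return ("clockwise", cw)
    return ("counterclockwise", ccw)
```

For a target 240 degrees counterclockwise from the current heading, this picks the 120-degree clockwise rotation, matching the example above.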
According to the method and the device, the rotation direction of the cleaning device is determined based on the angle of rotation required by the cleaning device to turn to the target object along the clockwise direction and the anticlockwise direction, so that the angle of rotation required by the cleaning device can be reduced, and the efficiency of rotation control of the cleaning device is improved.
In one exemplary embodiment, after controlling the cleaning apparatus to rotate by the target angle, the method further includes:
S81, performing image acquisition through an image acquisition component on the cleaning device to obtain a second acquired image;
S82, in the case where the target object is not recognized from the second acquired image, controlling the cleaning device to rotate continuously in a preset direction until the target object is recognized from an image acquired by the image acquisition component.
In this embodiment, after the cleaning device is controlled to rotate by the target angle, when the target object needs to be positioned or whether the target object is turned or not needs to be checked, the image acquisition can be performed by the image acquisition component on the cleaning device, so as to obtain a second acquired image. Here, the image capturing section is similar to that of the previous embodiment, and the second captured image is similar to that of the first captured image of the previous embodiment, and a description thereof will be omitted.
Because the second acquired image is acquired after the cleaning device has rotated by the target angle, the rotation may have failed, or the device may still not have successfully turned toward the target object after rotating by the target angle. The target object may therefore be present or absent in the second acquired image; the target object being present in an acquired image means that the image contains the target object.
After the second acquired image is obtained, object recognition may be performed on it in a manner similar to that applied to the first acquired image in the foregoing embodiment. If the target object is recognized, its orientation information may be determined in a manner similar to that described above, and the cleaning device may be controlled to move toward the target object; this has already been described and is not detailed again here.
In the case where the target object is not recognized from the second captured image, the cleaning device may be controlled to continue rotating in the preset direction until the target object is recognized from the image captured by the image capturing section. The preset direction here may be the rotation direction of the aforementioned target angle, i.e., clockwise or counterclockwise. In the rotating process, the image acquisition component can be controlled to continuously acquire images, and object identification is carried out on the acquired images until a target object is identified.
Optionally, after the cleaning device is controlled to rotate for 360 degrees along the preset direction, if the target object is not yet identified, the cleaning device can be controlled to stop rotating, the cleaning task is called out, and prompt information is sent out through the cleaning device to prompt that the target object is not identified, the target object can be prompted to send out voice control data again, or the target object is prompted to call the cleaning device in other modes to execute the cleaning task.
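The rotate-and-search behavior, including the 360-degree give-up condition, can be sketched as follows; `detect` stands in for the camera capture plus humanoid detection and is an assumed callback:

```python
def search_for_target(detect, step_deg=30.0):
    """Rotate in a preset direction in fixed steps, running humanoid
    detection after each step, until the target is seen or a full
    360-degree sweep completes.  Returns the total rotation at which the
    target was found, or None to signal that the call task should be
    cancelled and the user prompted to issue voice control data again."""
    rotated = 0.0
    while rotated < 360.0:
        if detect(rotated):        # capture a frame and look for the target
            return rotated
        rotated += step_deg        # keep turning in the preset direction
    return None                    # swept the full circle without a detection
```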
According to the embodiment, under the condition that the target object is not identified after the cleaning device turns to the target object, the cleaning device is controlled to continuously rotate until the target object is identified, so that the success rate of calling the cleaning device can be improved.
In an exemplary embodiment, after the image acquisition by the image acquisition component on the cleaning device, the method further comprises:
S91, performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, wherein the human-shaped object detection result is used for indicating the human-shaped object identified from the second acquired image;
S92, determining that the target object is identified from the second acquired image when the human-shaped object detection result indicates that the human-shaped object is identified from the second acquired image;
S93, in the case where the human-shaped object detection result indicates that the human-shaped object is not recognized from the second acquired image, determining that the target object is not recognized from the second acquired image.
In this embodiment, the object recognition performed on the second acquired image may be detection of humanoid objects in the second acquired image. After the second acquired image is obtained, humanoid object detection can be performed on it to obtain a humanoid object detection result. Here, the human shape includes the rough outline of the trunk and limbs of a human body; object features of a humanoid object may be preset, and humanoid object detection is performed on the second acquired image based on these preset object features. The humanoid object detection result may indicate whether a humanoid object is identified from the second acquired image and may also carry object information of the identified humanoid object, such as its azimuth, determined from at least the second acquired image.
In the case where the human-shaped object detection result indicates that the human-shaped object is identified from the second captured image, it may be determined that the target object is identified from the second captured image. In this case, if the identified humanoid object is only one, the humanoid object may be determined as the target object. In the case where the human-shaped object detection result indicates that the human-shaped object is not recognized from the second captured image, it may be determined that the target object is not recognized from the second captured image, and the cleaning apparatus may perform the aforementioned operation of continuous rotation so as to find the target object.
According to the embodiment, the human-shaped object recognition is carried out on the acquired image to determine whether the target object is recognized or not, so that the efficiency of recognizing the target object can be improved.
In an exemplary embodiment, after performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, the method further includes:
S101, determining an object azimuth corresponding to each of the multiple humanoid objects in the case where the humanoid object detection result indicates that multiple humanoid objects are identified from the second acquired image;
S102, determining, from the multiple humanoid objects, the humanoid object whose object azimuth matches the azimuth information of the target object as the target object.
If the humanoid object detection result indicates that multiple humanoid objects are identified from the second acquired image, the object azimuth corresponding to each humanoid object can first be determined. Then, based on each object azimuth and the azimuth information of the target object determined from the voice feature information, the humanoid object whose azimuth is closest to that of the target object is selected as the target object; that is, among the humanoid objects, the one whose object azimuth matches the azimuth information of the target object is determined to be the target object.
Here, the object position corresponding to each human-shaped object may be determined according to the aforementioned monocular algorithm or may be determined according to the aforementioned object point cloud data. The humanoid object matching the azimuth information of the target object may be the humanoid object closest to the azimuth information of the target object.
For example, after the host adjusts its camera angle according to the corrected sound source angle information, it judges through AI image technology whether humanoid information exists; if a human shape is detected, the position of the calling person is calculated using a monocular algorithm and LDS point cloud technology; if multiple people are present, the person closest to the sound source direction is selected as the recognition target.
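Selecting, among several detected humanoid objects, the one closest to the sound source direction can be sketched as follows (bearings in degrees; the wrap-around at 0/360 is the only subtlety):

```python
def pick_target(humanoid_bearings_deg, sound_source_bearing_deg):
    """Among the bearings of all detected humanoid objects, return the one
    whose direction best matches the sound source localization result."""
    def angular_gap(a, b):
        # shortest angular distance, handling wrap-around at 0/360
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(humanoid_bearings_deg,
               key=lambda h: angular_gap(h, sound_source_bearing_deg))
```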
By the embodiment, when a plurality of humanoid objects are identified, the humanoid object closest to the sound source direction is taken as the target object, so that the accuracy of determining the target object can be improved.
In one exemplary embodiment, after controlling the movement of the cleaning device to the target object, the method further includes:
S111, in the case where it is detected that the distance between the cleaning apparatus and the target object is less than or equal to the preset distance threshold, controlling the cleaning apparatus to stop moving toward the target object, and controlling the cleaning apparatus to perform the cleaning operation until the object to be cleaned is not detected within the detection range of the target detection part of the cleaning apparatus.
Since what the target object calls the cleaning device to clean is generally a region rather than a single point, the area to be cleaned has a certain extent. In order to prevent the cleaning device from hitting the target object during movement while still ensuring that it can clean the area, a distance threshold (i.e., a preset distance threshold) may be preset and used to control the cleaning device to stop moving toward the target object.
In the present embodiment, after controlling the movement of the cleaning device toward the target object, the distance between the cleaning device and the target object may be detected in real time. If the distance between the cleaning device and the target object is detected to be less than or equal to the preset distance threshold, the cleaning device may be controlled to stop moving toward the target object.
Alternatively, in order to improve the cleaning efficiency of the cleaning apparatus, the cleaning apparatus may be controlled to perform area cleaning after the cleaning apparatus is controlled to stop moving toward the target object, and whether or not the object to be cleaned, for example, waste such as paper dust, exists in the detection range thereof may be detected by the target detection means thereon, and at the same time, the object position of the detected object to be cleaned may be recorded. The object detection means may be the aforementioned point cloud acquisition means, image acquisition means, or other detection means as long as it can be used for object detection.
During area cleaning, it can be determined whether an object to be cleaned is detected within the detection range of the target detection component. If one is detected, its object position continues to be recorded and the cleaning operation on it is performed; for objects that have been cleaned, the recorded object positions can be deleted. If no object to be cleaned is detected and no recorded object to be cleaned remains, the cleaning device can be controlled to stop the cleaning operation.
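The clean-until-nothing-detected loop can be sketched as follows; `clean_one` stands in for the actuator and detector and is an assumed callback that may report additional debris noticed while cleaning:

```python
def spot_clean(detected_spots, clean_one):
    """Clean recorded dirt positions until none remain.  clean_one(spot)
    cleans one spot and returns any new spots noticed on the way; each
    cleaned spot's record is dropped, mirroring the record/delete cycle
    described above."""
    pending = list(detected_spots)
    cleaned = []
    while pending:
        spot = pending.pop(0)
        newly_seen = clean_one(spot)           # may detect more debris while cleaning
        cleaned.append(spot)                   # cleaned -> drop from pending records
        pending.extend(s for s in newly_seen
                       if s not in pending and s not in cleaned)
    return cleaned
```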
According to the embodiment, the cleaning device is controlled to move to a certain distance from the target object, and the cleaning device is controlled to execute the cleaning operation until the object to be cleaned cannot be detected, so that the cleaning efficiency of the cleaning device on the ground can be improved, and the control flexibility of the cleaning device is improved.
A control method of the cleaning apparatus in an embodiment of the present application will now be explained with reference to an alternative example. In this alternative example, the cleaning device is a sweeper, the voice acquisition component is a microphone array, the image acquisition component is a monocular camera, and the point cloud acquisition component is an LDS sensor.
This alternative example provides a voice-controlled call-to-clean scheme for the sweeper. The scheme is based primarily on image recognition, assisted by sound-source localization, so that in a multi-person scene the sweeper can accurately determine the position of the person issuing the call task. As shown in fig. 3, the flow of the control method of the cleaning apparatus in this alternative example may include the following steps:
Step S302: environment information about the sweeper's location (such as corners, obstacles, and the like) is sent to the sound-source localization algorithm, the acquired voice information is corrected accordingly, and the calling voice source is localized according to sound-feature information such as the time difference and power of the voice received by the different microphones of the microphone array.
Step S304: the camera angle of the sweeper is adjusted according to the corrected sound-source angle information.
Step S306: human-shape detection is performed with an AI image technique to judge whether human-shape information exists in the current image; when it is detected, the caller's position is measured using a monocular algorithm and the LDS point cloud.
Step S308: if multiple persons are detected, the person closest to the sound-source direction is selected as the recognition target.
Step S310: according to the caller's position, the sweeper is controlled to travel to that position and carry out the cleaning task.
Step S312: if no human-shape information is detected in the image obtained after the sweeper is adjusted to the sound-source angle, the sweeper is controlled to rotate and detect at other angles.
Step S314: during rotation detection, if a human shape is detected, the caller's position is measured and the sweeper is controlled to travel there to perform the cleaning task.
Step S316: if no human shape is detected after a full 360-degree rotation, the call-to-clean task is ended.
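The flow of steps S302–S316 can be sketched in Python. Every robot method here (`localize_sound_source`, `detect_humans`, `measure_position`, and so on) is a hypothetical placeholder; only the branching mirrors the steps above.

```python
def call_clean_pipeline(robot, rotate_step_deg=30):
    """Sketch of steps S302-S316: localize the caller by sound,
    confirm by human-shape detection, rotate up to 360 degrees if needed."""
    # S302: localize the calling voice, correcting for the environment.
    angle = robot.localize_sound_source(robot.environment_info())
    # S304: turn the camera toward the corrected sound-source angle.
    robot.turn_camera_to(angle)

    # S306/S308: detect human shapes; with several, pick the one
    # closest to the sound-source direction.
    persons = robot.detect_humans()
    if not persons:
        # S312/S314: rotate and re-detect, up to a full turn.
        for _ in range(360 // rotate_step_deg):
            robot.rotate(rotate_step_deg)
            persons = robot.detect_humans()
            if persons:
                break
        else:
            return None  # S316: no human shape after 360 degrees, end task
    caller = min(persons, key=lambda p: abs(p.angle - angle))
    position = robot.measure_position(caller)  # monocular + LDS point cloud
    robot.go_clean_at(position)  # S310/S314: travel there and clean
    return position
```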
Through this alternative example, the accuracy of call-to-clean can be improved, and at the same time the call task feels more intelligent to the user.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware, though in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, or optical disk) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method of the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a control device of a cleaning apparatus for implementing the above control method of the cleaning apparatus. Fig. 4 is a block diagram of an alternative control device of a cleaning apparatus according to an embodiment of the present application; as shown in fig. 4, the device may include:
an acquiring unit 402, configured to acquire voice control data of a target object, where the voice control data is used to control a cleaning device to perform area cleaning on a location area where the target object is located;
the positioning unit 404 is connected to the obtaining unit 402, and is configured to position the target object according to the voice control data and the environmental information where the cleaning device is located, so as to obtain first orientation information of the target object;
A first control unit 406, coupled to the positioning unit 404, for controlling the cleaning apparatus to turn to the target object according to the first orientation information;
The first execution unit 408 is connected to the first control unit 406, and is configured to control the cleaning device to move toward the target object, and perform area cleaning on a location area where the target object is located.
It should be noted that, the acquiring unit 402 in this embodiment may be used to perform the step S202, the positioning unit 404 in this embodiment may be used to perform the step S204, the first control unit 406 in this embodiment may be used to perform the step S206, and the first performing unit 408 in this embodiment may be used to perform the step S208.
Through the above modules, the voice control data of the target object is obtained, wherein the voice control data is used to control the cleaning device to perform area cleaning on the location area where the target object is located; the target object is positioned according to the voice control data and the environmental information where the cleaning device is located, so as to obtain the first orientation information of the target object; the cleaning device is controlled to turn to the target object according to the first orientation information; and the cleaning device is controlled to move toward the target object and perform area cleaning on the location area where the target object is located. In this way, the flexibility of area cleaning can be improved.
In one exemplary embodiment, the positioning unit includes:
the first determining module is used for determining voice interference information corresponding to the voice control data according to the environmental information of the cleaning equipment;
The correction module is used for correcting the voice control data by using the voice interference information to obtain corrected voice control data;
the extraction module is used for extracting the voice characteristics of the corrected voice control data to obtain target voice characteristics;
And the positioning module is used for positioning the sound source according to the target voice characteristics to obtain first direction information.
In one exemplary embodiment, the correction module includes:
and the execution sub-module is used for executing echo cancellation operation on the voice control data by using the echo information corresponding to the voice control data to obtain corrected voice control data, wherein the voice interference information comprises the echo information corresponding to the voice control data.
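For illustration, a very simplified form of echo cancellation can be sketched as subtracting a scaled, delayed copy of known echo reference data from the voice control data. The function, the known delay, and the least-squares gain estimate are assumptions for this sketch; practical devices use adaptive filters (e.g. NLMS or frequency-domain AEC), and the patent does not specify the algorithm.

```python
def cancel_echo(mic, echo_ref, delay):
    """Minimal echo-cancellation sketch: model the microphone signal as
    speech plus a scaled, delayed copy of a known reference (the echo
    information), estimate the scale by least squares, and subtract it."""
    # Align the reference with the microphone signal by the known delay.
    aligned = [0.0] * delay + list(echo_ref)
    aligned = aligned[:len(mic)] + [0.0] * max(0, len(mic) - len(aligned))
    # Least-squares estimate of the echo gain: <mic, ref> / <ref, ref>.
    num = sum(m * r for m, r in zip(mic, aligned))
    den = sum(r * r for r in aligned) or 1.0
    gain = num / den
    # Subtract the estimated echo to obtain the corrected voice data.
    return [m - gain * r for m, r in zip(mic, aligned)]
```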
In one exemplary embodiment, the first determination module includes:
The first determining submodule is used for determining the voice interference information corresponding to the voice control data according to obstacle information in the location area of the cleaning device and wall information in the location area of the cleaning device, wherein the environmental information of the cleaning device includes the obstacle information and the wall information.
In one exemplary embodiment, the corrected voice control data includes multiple paths of voice data in one-to-one correspondence with multiple paths of microphones in the microphone array of the cleaning device; the extraction module comprises:
the extraction sub-module is used for extracting voice characteristics of each path of voice data in the multipath voice data respectively to obtain voice characteristics corresponding to each path of voice data;
and the second determining submodule is used for determining the receiving time difference of any two paths of voice data in the multipath voice data according to the receiving time of each path of voice data, wherein the target voice characteristic comprises the voice characteristic corresponding to each path of voice data and the receiving time difference of any two paths of voice data.
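The receiving-time difference between two microphone channels can be estimated, for example, as the lag that maximizes their cross-correlation. The function below is a hypothetical sketch of that idea; real systems typically use GCC-PHAT in the frequency domain, and the patent does not specify the method.

```python
def arrival_time_difference(ch_a, ch_b, max_lag):
    """Sketch of a pairwise receiving-time difference: the lag (in
    samples) maximizing the cross-correlation of two microphone channels.
    A positive result means ch_b received the sound later than ch_a."""
    def corr_at(lag):
        # Correlate ch_a[n] with ch_b[n + lag] over the valid overlap.
        return sum(ch_a[n] * ch_b[n + lag]
                   for n in range(len(ch_a))
                   if 0 <= n + lag < len(ch_b))
    return max(range(-max_lag, max_lag + 1), key=corr_at)
```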
In an exemplary embodiment, the above apparatus further includes:
The first acquisition unit is used for, before the cleaning device is controlled to move toward the target object, performing image acquisition through the image acquisition component on the cleaning device to obtain a first acquired image, and acquiring point cloud data through the point cloud acquisition component to obtain object point cloud data of the target object;
the first identification unit is used for carrying out object identification on the first acquired image to obtain second azimuth information of the target object;
the second identification unit is used for carrying out object identification on the object point cloud data to obtain reference azimuth information of the target object;
And the correction unit is used for correcting the second azimuth information by using the reference azimuth information to obtain corrected second azimuth information, wherein the corrected second azimuth information is used for controlling the cleaning device to move towards the target object.
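The patent does not specify how the reference azimuth information corrects the second azimuth information; one simple assumption is a weighted blend of the two angles on the unit circle, sketched below. The function name, the weight parameter, and its default are hypothetical.

```python
import math

def correct_azimuth(image_deg, reference_deg, weight=0.5):
    """Hypothetical correction rule: blend the image-derived azimuth with
    the point-cloud reference azimuth. Blending on the unit circle keeps
    angles near the 0/360 boundary well-behaved (359 and 1 average to 0)."""
    x = ((1 - weight) * math.cos(math.radians(image_deg))
         + weight * math.cos(math.radians(reference_deg)))
    y = ((1 - weight) * math.sin(math.radians(image_deg))
         + weight * math.sin(math.radians(reference_deg)))
    return math.degrees(math.atan2(y, x)) % 360
```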
In one exemplary embodiment, the first execution unit includes:
The second determining module is used for determining a target angle to be rotated by the cleaning equipment according to the first direction information, wherein the target angle is a relative angle between the target object and the cleaning equipment;
and the control module is used for controlling the cleaning device to rotate by a target angle so as to turn the cleaning device to a target object.
In one exemplary embodiment, the second determining module includes:
a third determining sub-module for determining a first rotation angle corresponding to the clockwise direction according to the first orientation information, wherein the first rotation angle is the angle by which the cleaning device needs to rotate in the clockwise direction to turn to the target object;
a fourth determining sub-module for determining a second rotation angle corresponding to the counterclockwise direction according to the first orientation information, wherein the second rotation angle is the angle by which the cleaning device needs to rotate in the counterclockwise direction to turn to the target object;
And a fifth determining sub-module for determining the smaller angle of the first rotation angle and the second rotation angle as the target angle.
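Choosing between the first and second rotation angles can be sketched as follows, assuming compass-style bearings in degrees that increase clockwise; the sign convention (positive return value = clockwise) is an assumption of this sketch.

```python
def target_turn(current_heading, target_bearing):
    """Sketch of the target-angle choice: compute the clockwise and
    counterclockwise rotations that would face the target, then take
    the smaller one. Angles are in degrees."""
    clockwise = (target_bearing - current_heading) % 360          # first rotation angle
    counterclockwise = (current_heading - target_bearing) % 360   # second rotation angle
    if clockwise <= counterclockwise:
        return clockwise       # rotate clockwise by the target angle
    return -counterclockwise   # rotate counterclockwise by the target angle
```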
In an exemplary embodiment, the above apparatus further includes:
The second acquisition unit is used for acquiring images through an image acquisition component on the cleaning equipment after controlling the cleaning equipment to rotate a target angle to obtain a second acquisition image;
and a second control unit for controlling the cleaning device to continuously rotate along a preset direction until the target object is identified in the image acquired from the image acquisition part in the case that the target object is not identified in the second acquired image.
In an exemplary embodiment, the above apparatus further includes:
The detection unit is used for, after a second acquired image is obtained through image acquisition by the image acquisition component on the cleaning device, performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, wherein the human-shaped object detection result is used for indicating the human-shaped object identified from the second acquired image;
A first determining unit configured to determine that the target object is identified from the second captured image, in a case where the human-shaped object detection result indicates that the human-shaped object is identified from the second captured image;
And a second determining unit configured to determine that the target object is not recognized from the second captured image, in a case where the human-shaped object detection result indicates that the human-shaped object is not recognized from the second captured image.
In an exemplary embodiment, the above apparatus further includes:
a third determining unit, configured to determine, after performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, an object position corresponding to each of the plurality of human-shaped objects if the human-shaped object detection result indicates that the plurality of human-shaped objects are identified from the second acquired image;
And a fourth determining unit configured to determine, as the target object, a humanoid object, of the plurality of humanoid objects, whose object orientation matches orientation information of the target object.
In an exemplary embodiment, the above apparatus further includes:
And a third control unit for, after the cleaning device is controlled to move toward the target object, controlling the cleaning device to stop moving toward the target object in a case where the distance between the cleaning device and the target object is detected to be less than or equal to a preset distance threshold, and controlling the cleaning device to perform a cleaning operation until no object to be cleaned is detected within the detection range of the target detection component of the cleaning device.
It should be noted that the above modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the above embodiments. The above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to yet another aspect of an embodiment of the present application, there is also provided a storage medium. Alternatively, in this embodiment, the above-described storage medium may be used to execute the program code of the control method of the cleaning apparatus of any one of the above-described embodiments of the present application.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
S1, acquiring voice control data of a target object, wherein the voice control data are used for controlling cleaning equipment to clean a position area where the target object is located;
S2, positioning a target object according to the voice control data and the environmental information of the cleaning equipment to obtain first orientation information of the target object;
s3, controlling the cleaning equipment to turn to the target object according to the first direction information;
And S4, controlling the cleaning equipment to move towards the target object, and cleaning the area of the position area where the target object is located.
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device for implementing the control method of the cleaning apparatus described above, which may be a server, a terminal, or a combination thereof.
Fig. 5 is a block diagram of an alternative electronic device, according to an embodiment of the present application, including a processor 502, a communication interface 504, a memory 506, and a communication bus 508, as shown in fig. 5, wherein the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508, wherein,
A memory 506 for storing a computer program;
the processor 502 is configured to execute the computer program stored in the memory 506, and implement the following steps:
S1, acquiring voice control data of a target object, wherein the voice control data are used for controlling cleaning equipment to clean a position area where the target object is located;
S2, positioning a target object according to the voice control data and the environmental information of the cleaning equipment to obtain first orientation information of the target object;
s3, controlling the cleaning equipment to turn to the target object according to the first direction information;
And S4, controlling the cleaning equipment to move towards the target object, and cleaning the area of the position area where the target object is located.
Alternatively, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus. The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM or nonvolatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
As an example, the memory 506 may include, but is not limited to, the acquisition unit 402, the positioning unit 404, the first control unit 406, and the first execution unit 408 of the control device of the cleaning apparatus. It may also include, but is not limited to, other module units of the control device of the cleaning apparatus, which are not described again in this example.
The processor may be a general-purpose processor, which may include, but is not limited to, a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is only illustrative; the device implementing the control method of the cleaning device may be a terminal device such as a smart phone (e.g. an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Device, MID), a PAD, and so on. Fig. 5 does not limit the structure of the electronic device; for example, the electronic device may also include more or fewer components (e.g., a network interface, a display device, etc.) than shown in fig. 5, or have a different configuration from that shown in fig. 5.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, ROM, RAM, magnetic or optical disk, etc.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; the division into units is merely a logical functional division, and there may be another division in actual implementation: for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (15)

1. A control method of a cleaning apparatus, comprising:
Acquiring voice control data of a target object, wherein the voice control data are used for controlling the cleaning equipment to clean a position area where the target object is located;
positioning the target object according to the voice control data and the environmental information of the cleaning equipment to obtain first orientation information of the target object;
controlling the cleaning equipment to turn to the target object according to the first orientation information;
and controlling the cleaning equipment to move towards the target object, and cleaning the area of the position area where the target object is located.
2. The method of claim 1, wherein the locating the target object according to the voice control data and the environmental information in which the cleaning device is located to obtain the first orientation information of the target object comprises:
determining voice interference information corresponding to the voice control data according to the environmental information of the cleaning equipment;
Correcting the voice control data by using the voice interference information to obtain corrected voice control data;
extracting voice characteristics of the corrected voice control data to obtain target voice characteristics;
and performing sound source positioning according to the target voice characteristics to obtain the first direction information.
3. The method of claim 2, wherein correcting the voice control data using the voice disturbance information to obtain corrected voice control data comprises:
And performing echo cancellation operation on the voice control data by using echo information corresponding to the voice control data to obtain corrected voice control data, wherein the voice interference information comprises the echo information corresponding to the voice control data.
4. A method according to claim 3, wherein said determining speech disturbance information corresponding to said speech control data based on environmental information in which said cleaning device is located comprises:
And determining the voice interference information corresponding to the voice control data according to obstacle information in the location area of the cleaning device and wall information in the location area of the cleaning device, wherein the environmental information of the cleaning device includes the obstacle information and the wall information.
5. The method of claim 2, wherein the corrected voice control data comprises multiple paths of voice data in one-to-one correspondence with multiple paths of microphones in a microphone array of the cleaning device; and extracting the voice characteristic of the corrected voice control data to obtain a target voice characteristic, wherein the voice characteristic comprises the following steps:
Respectively extracting voice characteristics of each path of voice data in the multipath voice data to obtain voice characteristics corresponding to each path of voice data;
And determining the receiving time difference of any two paths of voice data in the multipath voice data according to the receiving time of each path of voice data, wherein the target voice characteristic comprises the voice characteristic corresponding to each path of voice data and the receiving time difference of any two paths of voice data.
6. The method of claim 1, wherein prior to said controlling movement of the cleaning device toward the target object, the method further comprises:
Acquiring an image through an image acquisition component on the cleaning equipment to obtain a first acquired image, and acquiring point cloud data through a point cloud acquisition component to obtain object point cloud data of the target object;
Performing object recognition on the first acquired image to obtain second azimuth information of the target object;
performing object recognition on the object point cloud data to obtain reference azimuth information of the target object;
And correcting the second azimuth information by using the reference azimuth information to obtain corrected second azimuth information, wherein the corrected second azimuth information is used for controlling the cleaning equipment to move towards the target object.
7. The method of claim 1, wherein said controlling the cleaning device to steer toward the target object in accordance with the first orientation information comprises:
Determining a target angle to be rotated by the cleaning equipment according to the first orientation information, wherein the target angle is a relative angle between the target object and the cleaning equipment;
And controlling the cleaning device to rotate the target angle so as to turn the cleaning device to the target object.
8. The method of claim 7, wherein determining the target angle at which the cleaning device is to be rotated based on the first orientation information comprises:
Determining a first rotation angle corresponding to the clockwise direction according to the first orientation information, wherein the first rotation angle is the angle by which the cleaning device needs to rotate in the clockwise direction to turn to the target object;
Determining a second rotation angle corresponding to the counterclockwise direction according to the first orientation information, wherein the second rotation angle is the angle by which the cleaning device needs to rotate in the counterclockwise direction to turn to the target object;
and determining the smaller angle of the first rotation angle and the second rotation angle as the target angle.
9. The method of claim 7, wherein after said controlling the cleaning device to rotate the target angle, the method further comprises:
acquiring an image through an image acquisition component on the cleaning equipment to obtain a second acquired image;
and in the case that the target object is not identified from the second acquired image, controlling the cleaning device to continuously rotate along a preset direction until the target object is identified from the image acquired by the image acquisition component.
10. The method of claim 9, wherein after the image acquisition by the image acquisition component on the cleaning device results in a second acquired image, the method further comprises:
performing human-shaped object detection on the second acquired image to obtain a human-shaped object detection result, wherein the human-shaped object detection result is used for indicating the human-shaped object identified from the second acquired image;
Determining that the target object is identified from the second acquired image in the case that the human-shaped object detection result indicates that the human-shaped object is identified from the second acquired image;
in the case that the human-shaped object detection result indicates that the human-shaped object is not recognized from the second acquired image, it is determined that the target object is not recognized from the second acquired image.
11. The method according to claim 10, wherein after the performing the human-shaped object detection on the second acquired image, the method further comprises:
Determining an object position corresponding to each of the plurality of human-shaped objects in a case where the human-shaped object detection result indicates that the plurality of human-shaped objects are identified from the second acquired image;
And determining the humanoid object with the object azimuth matched with the azimuth information of the target object as the target object in the plurality of humanoid objects.
12. The method according to any one of claims 1 to 11, wherein after said controlling the movement of the cleaning device to the target object, the method further comprises:
And, in a case where the distance between the cleaning device and the target object is detected to be less than or equal to a preset distance threshold, controlling the cleaning device to stop moving toward the target object, and controlling the cleaning device to perform a cleaning operation until no object to be cleaned is detected within the detection range of the target detection component of the cleaning device.
13. A control device of a cleaning device, comprising:
an acquisition unit, configured to acquire voice control data of a target object, wherein the voice control data is used for controlling the cleaning device to perform area cleaning on a location area where the target object is located;
a positioning unit, configured to position the target object according to the voice control data and environment information of the cleaning device, to obtain first orientation information of the target object;
a first control unit, configured to control the cleaning device to turn toward the target object according to the first orientation information; and
a first execution unit, configured to control the cleaning device to move toward the target object and to perform area cleaning on the location area where the target object is located.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method of any one of claims 1 to 12.
15. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is arranged to perform the method of any one of claims 1 to 12 by means of the computer program.
CN202211277621.5A 2022-10-18 2022-10-18 Control method and device of cleaning equipment, storage medium and electronic device Pending CN117941991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211277621.5A CN117941991A (en) 2022-10-18 2022-10-18 Control method and device of cleaning equipment, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211277621.5A CN117941991A (en) 2022-10-18 2022-10-18 Control method and device of cleaning equipment, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN117941991A 2024-04-30

Family

ID=90793198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211277621.5A Pending CN117941991A (en) 2022-10-18 2022-10-18 Control method and device of cleaning equipment, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN117941991A (en)

Similar Documents

Publication Publication Date Title
CN109506568B (en) Sound source positioning method and device based on image recognition and voice recognition
US10402984B2 (en) Monitoring
JP4630146B2 (en) Position management system and position management program
JP4460528B2 (en) IDENTIFICATION OBJECT IDENTIFICATION DEVICE AND ROBOT HAVING THE SAME
CN110968083B (en) Method for constructing grid map, method, device and medium for avoiding obstacles
CN106775572B (en) Electronic device with microphone array and control method thereof
US20180286432A1 (en) Voice detection apparatus, voice detection method, and non-transitory computer-readable storage medium
CN111432115B (en) Face tracking method based on voice auxiliary positioning, terminal and storage device
CN102542247A (en) Information processing device, information processing method, and program
KR20160113857A (en) Robot cleaner, and robot cleaning system
US11605179B2 (en) System for determining anatomical feature orientation
JP7063760B2 (en) Mobile
CN113787517B (en) Self-moving robot control method, device, equipment and readable storage medium
CN108733059A (en) A kind of guide method and robot
CN111168685B (en) Robot control method, robot, and readable storage medium
CN111090412B (en) Volume adjusting method and device and audio equipment
CN113961009B (en) Obstacle avoidance method and device for sweeper, storage medium and electronic device
KR20120033414A (en) Robot for operating recognition of self-position and operation method thereof
CN107111363B (en) Method, device and system for monitoring
US11724397B2 (en) Robot and method for controlling the same
JP6890451B2 (en) Remote control system, remote control method and program
CN117941991A (en) Control method and device of cleaning equipment, storage medium and electronic device
CN110597077B (en) Method and system for realizing intelligent scene switching based on indoor positioning
CN111103807A (en) Control method and device for household terminal equipment
JP2019522187A (en) Apparatus and related methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination