CN110051289B - Voice control method and device for sweeping robot, robot and medium
- Publication number: CN110051289B
- Application number: CN201910265952.9A
- Authority: CN (China)
- Prior art keywords: robot, voice, sound source, instruction, voice instruction
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47L—DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
- A47L11/00—Machines for cleaning floors, carpets, furniture, walls, or wall coverings
- A47L11/40—Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
- A47L11/4011—Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Abstract
The embodiments of the application provide a robot voice control method, an apparatus, a robot, and a medium, wherein the robot voice control method comprises the following steps: receiving a first voice instruction; recognizing the sound source direction of the first voice instruction and steering the robot to that direction; receiving a second voice instruction; and recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of that position. By designating positions through voice, the embodiments let the robot work exactly as the user instructs: the user can direct the robot by voice to clean a designated position, for example a bedroom or a living room, so the robot works purposefully according to the user's intention. In addition, a sensor is added as a positioning means during designated-position cleaning, which increases the accuracy of the robot's position recognition, improves working efficiency, and improves the user experience.
Description
Technical Field
The application relates to the technical field of control, and in particular to a voice control method and apparatus for a floor-sweeping robot, to the robot, and to a medium.
Background
With the development of technology, various robots equipped with a voice recognition system have appeared, such as floor-sweeping robots, floor-mopping robots, vacuum cleaners, and weed trimmers. These robots can receive voice instructions input by a user through the voice recognition system and execute the operations the instructions indicate, which saves both manual effort and labor costs.
In the related art, the robot receives a voice instruction input by the user and recognizes it with its own voice recognition system, which controls the robot to perform the instructed operation. However, when controlling the robot, users still want to direct it precisely to a specified position to do the corresponding work, for example to send the sweeping robot to a designated spot to clean. The existing way for a mobile terminal to control a sweeping robot to clean at a fixed point is as follows: the robot first has to determine an indoor map; the map is then mirrored to the user's mobile phone; after viewing the indoor map on the phone, the user taps the position to be cleaned according to its relative direction, and the robot then moves to that position and performs local cleaning.
However, this approach has the following drawbacks. On one hand, the robot must store an indoor map in advance. If the indoor layout changes (for example, the positions of tables, chairs, beds, or cabinets), the robot must re-identify and re-store the house map, so fixed-point cleaning is not available at any moment: the robot has to update the map, upload it to the server, and push the new map to the user's phone before the user can tap the area to be cleaned at its relative position in the new map. On the other hand, most sweeping robots currently cannot generate three-dimensional maps; their maps are two-dimensional and abstract, so the user cannot accurately locate the area to be cleaned in the actual room from the phone's map, and the position tapped on the map often differs from the actual position, which makes for a poor user experience.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a voice control method and apparatus for a robot, a robot, and a storage medium, so that the robot can move precisely to a specified position and work there according to a voice instruction.
In a first aspect, an embodiment of the present application provides a robot voice control method, where the method includes:
receiving a first voice instruction;
recognizing the sound source direction of the first voice instruction and steering the robot to the sound source direction;
receiving a second voice instruction;
and recognizing the sound source position of the second voice instruction and enabling the robot to move to the vicinity of the sound source position.
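To make the claimed flow concrete, here is a minimal Python sketch of the four steps; the robot API (`listen`, `estimate_direction`, `turn_to`, `estimate_position`, `move_near`) is hypothetical and only illustrates the order of operations, not an implementation from this patent:

```python
# Minimal sketch of the claimed four-step flow; the robot API is hypothetical.
def voice_control_flow(robot):
    first = robot.listen()                        # receive a first voice instruction
    direction = robot.estimate_direction(first)   # recognize its sound source direction
    robot.turn_to(direction)                      # steer the robot to that direction

    second = robot.listen()                       # receive a second voice instruction
    position = robot.estimate_position(second)    # recognize its sound source position
    robot.move_near(position)                     # move to the vicinity of that position
```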
In some possible implementations, the recognizing a sound source direction of the first voice instruction and steering the robot to the sound source direction includes:
identifying a sound source direction of the first voice instruction;
turning the robot to the sound source direction without stopping the operation of the driving motor;
stopping the operation of the drive motor.
In some possible implementations, after the identifying the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position, the method further includes:
receiving a third voice instruction;
and identifying the content of the third voice instruction and executing corresponding action according to the content of the third voice instruction.
In some possible implementations, the recognizing the content of the third voice instruction and performing a corresponding action according to the content of the third voice instruction includes:
and recognizing that the third voice command is a command with a correct position, and starting to execute a local cleaning action by the robot.
In some possible implementations, the recognizing the content of the third voice instruction and performing a corresponding action according to the content of the third voice instruction includes:
recognizing that the third voice instruction is a position error instruction, and moving the robot to a sound source position continuously according to the sound source position of the position error instruction;
and starting the robot to perform the action of local cleaning until receiving the instruction of the correct position.
In some possible implementations, the recognizing a sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position includes:
identifying a sound source position of the second voice instruction;
confirming the sound source position through a sensor;
moving the robot to the vicinity of the sound source position.
In some possible implementations, the moving the robot to the vicinity of the sound source position includes: the robot is moved to the vicinity of the sound source position at a moving speed faster than that at the time of cleaning.
In some possible implementations, the first voice instruction is a wake-up voice instruction, and the second voice instruction is a control voice instruction.
In some possible implementations, the wake-up voice command and the control voice command are pre-stored in the robot or a cloud connected to the robot.
In a second aspect, an embodiment of the present application provides a robot voice control apparatus, including:
the first receiving unit is used for receiving a first voice instruction;
a first recognition unit configured to recognize a sound source direction of the first voice instruction and steer the robot to the sound source direction;
the second receiving unit is used for receiving a second voice instruction;
a second recognition unit configured to recognize a sound source position of the second voice instruction and move the robot to the vicinity of the sound source position.
In some possible implementations, the first identifying unit is further configured to:
identifying a sound source direction of the first voice instruction;
turning the robot to the sound source direction without stopping the operation of the driving motor;
stopping the operation of the drive motor.
In some possible implementations, the apparatus further includes:
a third receiving unit, configured to receive a third voice instruction;
and the third recognition unit is used for recognizing the content of the third voice instruction and executing corresponding action according to the content of the third voice instruction.
In some possible implementations, the third identifying unit is further configured to:
and recognizing that the third voice command is a command with a correct position, and starting to execute a local cleaning action by the robot.
In some possible implementations, the third identifying unit is further configured to:
recognizing that the third voice instruction is a position error instruction, and moving the robot to a sound source position continuously according to the sound source position of the position error instruction;
and starting the robot to perform the action of local cleaning until receiving the instruction of the correct position.
In some possible implementations, the second identifying unit is further configured to:
identifying a sound source position of the second voice instruction;
confirming the sound source position through a sensor;
moving the robot to the vicinity of the sound source position.
In some possible implementations, the moving the robot to the vicinity of the sound source position includes:
the robot is moved to the vicinity of the sound source position at a moving speed faster than that at the time of cleaning.
In some possible implementations, the first voice instruction is a wake-up voice instruction, and the second voice instruction is a control voice instruction.
In some possible implementations, the wake-up voice command and the control voice command are pre-stored in the robot or a cloud connected to the robot.
In a third aspect, an embodiment of the present application provides a robot voice control apparatus, including a processor and a memory, where the memory stores computer program instructions executable by the processor, and the processor implements the method steps as described in any one of the above when executing the computer program instructions.
In a fourth aspect, embodiments of the present application provide a robot including an apparatus as described in any one of the above.
In a fifth aspect, embodiments of the present application provide a non-transitory computer readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps as recited in any of the above.
Compared with the prior art, the invention at least has the following technical effects:
the embodiment of the application can adopt a voice appointed mode, so that the robot can accurately work according to the instruction of the user, the user cleans the floor through the voice control robot at an appointed position, for example, a bedroom is cleaned, a living room is cleaned, the robot cleans the floor, and the like, so that the robot can perform purposeful work according to the intention of the user, and meanwhile, a sensor is added in the appointed position cleaning process as a positioning means, the position identification accuracy rate of the robot is increased, the working efficiency is improved, and the user experience is increased.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a top view of a robot structure provided in an embodiment of the present application;
fig. 3 is a bottom view of a robot structure provided in an embodiment of the present application;
FIG. 4 is a front view of a robot structure provided by an embodiment of the present application;
fig. 5 is a perspective view of a robot structure provided in an embodiment of the present application;
FIG. 6 is a block diagram of a robot according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a robot voice control method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a robot voice control method according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a robot voice control device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a robot voice control device according to yet another embodiment of the present application;
fig. 11 is an electronic structural schematic diagram of a robot according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe the voice instructions, the voice instructions should not be limited by these terms. These terms are used only to distinguish the voice instructions from one another. For example, without departing from the scope of the embodiments herein, a first voice instruction could also be termed a second voice instruction, and similarly a second voice instruction could be termed a first voice instruction.
To describe the behavior of the robot more clearly, the following directional definitions are made:
as shown in fig. 5, the robot 100 may travel over the ground through various combinations of movements relative to the following three mutually perpendicular axes defined by the body 110: a front-back axis X, a lateral axis Y, and a central vertical axis Z. The forward driving direction along the forward-rearward axis X is denoted as "forward", and the rearward driving direction along the forward-rearward axis X is denoted as "rearward". The transverse axis Y extends substantially along the axis defined by the center points of the drive wheel modules 141 between the right and left wheels of the robot.
The robot 100 may rotate about the Y-axis. "pitch up" when the forward portion of the robot 100 is tilted up and the backward portion is tilted down, and "pitch down" when the forward portion of the robot 100 is tilted down and the backward portion is tilted up. In addition, the robot 100 may rotate about the Z-axis. In the forward direction of the robot, the robot 100 is tilted to the right of the X axis as "right turn", and the robot 100 is tilted to the left of the X axis as "left turn".
Referring to fig. 1, a possible application scenario provided in the embodiment of the present application includes a robot, such as a sweeping robot, a mopping robot, a vacuum cleaner, or a weeding machine. In some embodiments, the robot may in particular be a sweeping robot or a mopping robot. In implementation, the robot may be provided with a voice recognition system so as to receive a voice instruction sent by the user and rotate in the direction of the arrow to respond to it, and may also be provided with a voice output device to output prompt voices. In other embodiments, the robot may be provided with a touch-sensitive display to receive operation instructions input by the user. The robot may also be provided with wireless communication modules, such as a WIFI module or a Bluetooth module, to connect with an intelligent terminal and receive, through the wireless communication module, operating instructions that the user transmits via the intelligent terminal.
The structure of the relevant robot is described below, as shown in fig. 2-5:
the robot 100 includes a robot body 110, a sensing system 120, a control system, a drive system 140, a cleaning system, an energy system, and a human-machine interaction system 170. As shown in fig. 2.
The machine body 110 includes a forward portion 111 and a rearward portion 112 and has an approximately circular shape (circular at both front and rear), but it may have other shapes, including but not limited to an approximate D-shape with a flat front and a rounded rear.
As shown in fig. 4, the sensing system 120 includes a position determining device 121 located above the machine body 110, a bumper 122 at the forward portion 111 of the machine body 110, a cliff sensor 123, and an ultrasonic sensor, an infrared sensor, a magnetometer, an accelerometer, a gyroscope, an odometer, and the like, providing various position and motion state information of the machine to the control system 130. The position determining device 121 includes, but is not limited to, a camera and a laser distance measuring device (LDS). The following takes a triangulation-based laser distance measuring device as an example of how position determination is performed. The basic principle of triangulation is the geometric relation of similar triangles and is not elaborated here.
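As a reminder of that similar-triangles relation (textbook laser triangulation, not wording from this patent): with lens focal length f, baseline b between emitter and receiving lens, and spot offset x on the image sensor, the distance is approximately d = f·b/x. A minimal sketch, with assumed example numbers:

```python
def triangulation_distance(focal_len_mm, baseline_mm, pixel_offset_mm):
    """Similar-triangles estimate used by laser triangulation rangefinders.

    d = f * b / x: as the spot offset x on the image sensor shrinks, the
    measured distance grows, which is why long ranges are hard to resolve
    with a finite pixel size (see the discussion further below).
    """
    if pixel_offset_mm <= 0:
        raise ValueError("spot offset must be positive")
    return focal_len_mm * baseline_mm / pixel_offset_mm

# Example (assumed values): f = 2 mm, b = 50 mm, x = 0.02 mm -> 5000 mm (5 m)
print(triangulation_distance(2.0, 50.0, 0.02))
```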
The laser ranging device includes a light emitting unit and a light receiving unit. The light emitting unit may include a light source that emits light, and the light source may include a light emitting element, such as an infrared or visible light emitting diode (LED) that emits infrared or visible light. Preferably, the light source is a light emitting element that emits a laser beam; in the present embodiment a laser diode (LD) is taken as an example. A light source using a laser beam can make the measurement more accurate than other light sources because of the monochromatic, directional, and collimated nature of the laser beam. For example, compared with a laser beam, the infrared or visible light emitted by an LED is affected by ambient factors (e.g., the color or texture of an object), which may reduce measurement accuracy. The laser diode (LD) may be a point laser, measuring two-dimensional position information of an obstacle, or a line laser, measuring three-dimensional position information of an obstacle within a certain range.
The light receiving unit may include an image sensor on which the light spot reflected or scattered by an obstacle forms. The image sensor may be an array of unit pixels in a single row or in multiple rows; these light receiving elements convert optical signals into electrical signals. The image sensor may be a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor, preferably a CMOS sensor for its cost advantage. The light receiving unit may also include a light receiving lens assembly, through which light reflected or scattered by the obstacle travels to form an image on the image sensor. The lens assembly may comprise one or more lenses.
The base may support the light emitting unit and the light receiving unit, which are disposed on the base and spaced apart from each other by a certain distance. To measure obstacles in all 360 degrees around the robot, the base may be rotatably disposed on the main body 110, or the base itself may remain fixed while a rotating element rotates the emitted and received light. The angular speed of the rotating element can be obtained with an optocoupler and an encoder disc: the optocoupler senses the tooth gaps on the disc, and the instantaneous angular speed is obtained by dividing the tooth-gap spacing by the time taken to traverse it. The denser the tooth gaps on the encoder disc, the higher the measurement accuracy and precision, but the more delicate the structure and the higher the computational load; conversely, the sparser the tooth gaps, the lower the accuracy and precision, but the simpler the structure, the smaller the computation, and the lower the cost.
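A toy illustration of that instantaneous angular-speed calculation; the tooth count and timing below are assumed example values, not figures from the patent:

```python
import math

def instantaneous_angular_speed(teeth_count, gap_traverse_time_s):
    """Angular speed of an encoder disc read by an optocoupler.

    Each tooth gap spans (2*pi / teeth_count) radians; dividing that angle
    by the time the optocoupler saw the gap pass gives the instantaneous
    angular speed in rad/s.
    """
    gap_angle_rad = 2 * math.pi / teeth_count
    return gap_angle_rad / gap_traverse_time_s

# Example (assumed): a 360-tooth disc whose gap passes in 0.5 ms -> ~34.9 rad/s
print(instantaneous_angular_speed(360, 0.0005))
```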
The data processing device connected to the light receiving unit, e.g., a DSP, records the obstacle distance values at all angles around the robot and transmits them to the data processing unit in the control system 130, e.g., an application processor (AP) comprising a CPU. The CPU runs a particle-filter-based positioning algorithm to obtain the current position of the robot and draws a map from this position for navigation. The positioning algorithm preferably uses simultaneous localization and mapping (SLAM).
Although a laser distance measuring device based on triangulation can in principle measure distances beyond a certain range without limit, long-range measurement, for example beyond 6 meters, is difficult in practice, mainly because of the size limit of the pixel unit on the sensor of the light receiving unit, and also because of the photoelectric conversion speed of the sensor, the data transmission speed between the sensor and the connected DSP, and the calculation speed of the DSP. Measured values are also affected by temperature in ways the system cannot tolerate, mainly because thermal expansion of the structure between the light emitting unit and the light receiving unit changes the angle between the incident and emergent light, and the two units themselves also suffer temperature drift. After long use, deformation accumulated from factors such as temperature change and vibration likewise seriously affects the measurement results. The accuracy of the measurements directly determines the accuracy of the map, which is the basis of the robot's further strategies and is therefore especially important.
As shown in fig. 3, the forward portion 111 of the machine body 110 may carry a bumper 122. As the drive wheel module 141 propels the robot across the floor during cleaning, the bumper 122 detects one or more events in the travel path of the robot 100 via a sensor system, such as an infrared sensor, e.g., an obstacle or a wall, and the robot can control the drive wheel module 141 to respond to the detected events, for example by moving away from the obstacle.
The control system 130 is disposed on a circuit board in the machine body 110 and includes non-transitory memory, such as a hard disk, flash memory, or random access memory, and a communication and computing processor, such as a central processing unit or an application processor. The application processor uses a positioning algorithm such as SLAM to draw an instant map of the robot's environment from the obstacle information fed back by the laser ranging device. Combining the distance and speed information fed back by the bumper 122, the cliff sensor 123, the ultrasonic sensor, the infrared sensor, the magnetometer, the accelerometer, the gyroscope, the odometer, and other sensing devices, it comprehensively judges the current working state of the sweeper, for example crossing a threshold, climbing onto a carpet, standing at a cliff edge, being stuck above or below, having a full dust box, or being picked up, and gives a specific next-step action strategy for each situation, so that the robot works more in line with the owner's requirements and gives a better user experience. Further, the control system 130 can plan the most efficient and reasonable cleaning path and cleaning mode based on the map information drawn by SLAM, greatly improving the cleaning efficiency of the robot.
The drive system 140 may steer the robot 100 across the ground based on drive commands having distance and angle information, such as x, y, and theta components. The drive system 140 includes a drive wheel module 141, which can control both the left and right wheels; to control the machine's motion more precisely, the drive wheel module 141 preferably includes a left drive wheel module and a right drive wheel module, opposed along a transverse axis defined by the body 110. So that the robot can move more stably or with greater mobility over the ground, it may include one or more driven wheels 142, including but not limited to universal wheels. The drive wheel module includes a travel wheel, a drive motor, and a control circuit for the drive motor, and can also connect to a circuit for measuring drive current and to an odometer. The drive wheel module 141 may be detachably coupled to the main body 110 for easy disassembly and maintenance. Each drive wheel may have a biased, drop-type suspension system, movably secured, e.g., rotatably attached, to the robot body 110, and receiving a spring bias directed downward and away from the robot body 110. The spring bias allows the drive wheels to maintain contact and traction with the floor with a certain landing force, while the cleaning elements of the robot 100 also contact the floor with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. For a dry cleaning system, the main cleaning function comes from the sweeping system 151 formed by the roller brush, the dust box, the blower, the air outlet, and the connecting parts between them. The roller brush, which has a certain interference with the ground, sweeps up the garbage on the floor and rolls it to the front of the dust suction opening between the roller brush and the dust box, where it is sucked into the dust box by the suction airflow that the blower generates through the dust box. The dust removal capability of the sweeper can be characterized by the dust pick-up efficiency (DPU), which is influenced by the structure and material of the roller brush, by the wind-power utilization of the air duct formed by the suction opening, the dust box, the blower, the air outlet, and the connecting parts between them, and by the type and power of the blower; it is a complex system design problem. Compared with an ordinary plug-in vacuum cleaner, improving dust removal capability matters more for a cleaning robot with limited energy, because it directly and effectively reduces the energy requirement: a machine that can clean 80 square meters of floor on one charge can then be developed into one that cleans 100 square meters or more. The service life of the battery also increases greatly as the number of charges decreases, so the frequency at which the user must replace the battery decreases as well. Most intuitively and importantly, dust removal capability is the most obvious and important part of the user experience: the user directly concludes whether the sweeping/wiping is clean. The dry cleaning system may also include an edge brush 152 having an axis of rotation that is angled relative to the floor, for moving debris into the roller-brush area of the cleaning system.
Energy systems include rechargeable batteries, such as nickel-metal hydride batteries and lithium batteries. The rechargeable battery can be connected with a charging control circuit, a battery-pack charging temperature detection circuit, and a battery under-voltage monitoring circuit, which are in turn connected to the single-chip microcomputer control circuit. The host charges by connecting to the charging pile through charging electrodes arranged on the side or underside of the machine body. If dust adheres to an exposed charging electrode, the charge accumulated during charging can melt and deform the plastic around the electrode, or even deform the electrode itself, so that normal charging can no longer continue.
The human-computer interaction system 170 includes keys on the host panel with which the user selects functions; it may further include a display screen and/or an indicator light and/or a loudspeaker that show the user the machine's current state or function selection items; and it may further include a mobile phone client program. For path-navigation cleaning equipment, the mobile phone client can show the user a map of the environment where the equipment is located and the position of the machine, and can provide richer and more user-friendly function items.
Figure 6 is a block diagram of a sweeping robot according to the present invention.
The sweeping robot according to the present embodiment may include: a microphone array unit for recognizing the user's voice, a communication unit for communicating with a remote control device or other devices, a moving unit for driving the main body, a cleaning unit, and a memory unit for storing information. An input unit (keys of the sweeping robot, etc.), an object detection sensor, a charging unit, the microphone array unit, a direction detection unit, a position detection unit, the communication unit, a driving unit, and the memory unit may be connected to the control unit so as to transmit predetermined information to the control unit or receive it from the control unit.
The microphone array unit may compare the voice input through the receiving unit with the information stored in the memory unit to determine whether the input voice corresponds to a specific command. If it does, the corresponding command is transmitted to the control unit. If the detected speech cannot be matched against the information stored in the memory unit, it may be treated as noise and ignored.
For example, suppose the detected voice corresponds to the word "come", and a control command corresponding to that word is stored in the information of the memory unit; in this case, the corresponding command may be transmitted to the control unit.
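A minimal sketch of that lookup; the command table and phrases below are illustrative assumptions, not a vocabulary defined by the patent:

```python
# Hypothetical command table; speech that matches no entry is ignored as noise.
COMMANDS = {
    "come": "WAKE",
    "clean here": "CLEAN_HERE",
    "correct position": "CONFIRM_OK",
    "wrong position": "CONFIRM_NO",
}

def match_command(detected_phrase: str):
    """Return the stored command for a phrase, or None to treat it as noise."""
    return COMMANDS.get(detected_phrase)

assert match_command("come") == "WAKE"
assert match_command("radio chatter") is None  # unmatched speech -> noise
```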
The direction detection unit may detect the direction of the voice by using the time difference or level of the voice arriving at the plurality of receiving units. The direction detection unit transmits the detected direction to the control unit, and the control unit may determine the moving path using the detected voice direction.
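For intuition, here is a sketch of the textbook far-field time-difference-of-arrival estimate for a single microphone pair; the patent does not specify this exact algorithm, and the names and numbers are assumptions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def bearing_from_tdoa(time_diff_s, mic_spacing_m):
    """Far-field bearing of a source relative to one microphone pair.

    The extra path length to the farther microphone is c * dt; dividing by
    the microphone spacing gives cos(theta) for the angle between the
    source direction and the microphone axis.
    """
    cos_theta = (SPEED_OF_SOUND * time_diff_s) / mic_spacing_m
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numeric noise
    return math.degrees(math.acos(cos_theta))

# Example (assumed): 0.1 ms arrival difference over 10 cm spacing -> ~70 degrees
print(bearing_from_tdoa(1e-4, 0.10))
```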
The position detection unit may detect the coordinates of the main body within predetermined map information. In one embodiment, the information detected by the camera and the map information stored in the memory unit may be compared to detect the current position of the main body. The position detection unit may also use a Global Positioning System (GPS) in addition to the camera.
In a broad sense, the position detection unit may detect whether the main body is disposed at a specific position. For example, the position detection unit may include a unit for detecting whether the main body is disposed on the charging pile.
For example, in the method for detecting whether the main body is disposed on the charging pile, whether the main body is disposed at the charging position may be detected according to whether power is input into the charging unit. For another example, whether the main body is disposed at the charging position may be detected by a charging position detecting unit disposed on the main body or the charging pile.
The communication unit may transmit/receive predetermined information to/from a remote control device or other devices. The communication unit may update map information of the sweeping robot.
The driving unit may operate the moving unit and the cleaning unit. The driving unit may move the moving unit along the moving path determined by the control unit.
The memory unit stores therein predetermined information related to the operation of the sweeping robot. For example, map information of an area where the sweeping robot is arranged, control command information corresponding to a voice recognized by the microphone array unit, direction angle information detected by the direction detection unit, position information detected by the position detection unit, and obstacle information detected by the object detection sensor may be stored in the memory unit.
The control unit may receive information detected by the receiving unit, the camera, and the object detection sensor. The control unit may recognize a voice of the user, detect a direction in which the voice occurs, and detect a position of the sweeping robot based on the transmitted information. Further, the control unit may also operate the moving unit and the cleaning unit.
In an embodiment, as shown in fig. 7 and applied to a robot in the application scenario of fig. 1, the present application provides a robot voice control method comprising the following steps:
step S702: receiving a first voice instruction;
Generally, the robot's speech recognition system has a sleep state and an active state. For example, when the robot is working or not in use, the speech recognition system is in the sleep state; in that state it occupies few of the robot's resources and recognizes no voice instruction other than the first voice instruction.
If the speech recognition system receives the first voice instruction while in the sleep state, it switches from the sleep state to the active state. In the active state, the speech recognition system can recognize the voice instructions configured in it, such as the first voice instruction, the second voice instruction, and so on.
Specifically, the first voice instruction is used to wake up the speech recognition system, i.e., to put it into the active state. In implementation, if the speech recognition system is in the sleep state when the robot receives the first voice instruction, it switches from the sleep state to the active state; if it is already active, it may be kept active, or no operation is performed. In a specific embodiment, the first voice instruction may be set by the user or by default; for example, it can be a user-defined phrase such as "turn on voice", "turn on", or "go here". The first voice instruction (the wake-up voice instruction) is stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the following takes "come" as the first voice instruction.
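A minimal sketch of that sleep/active switching, assuming a hypothetical recognizer class; the wake phrase is just the example used in this description:

```python
class SpeechRecognizer:
    """Sleep/active state machine; only the wake word is heard while asleep."""

    WAKE_WORD = "come"  # example first voice instruction from this description

    def __init__(self):
        self.active = False

    def on_voice(self, phrase: str):
        if not self.active:
            if phrase == self.WAKE_WORD:
                self.active = True   # sleep -> active on the first voice instruction
            return None              # anything else is ignored while asleep
        return phrase                # active: pass instructions on for handling
```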
Step S704: recognizing the sound source direction of the first voice instruction and steering the robot to the sound source direction;
After the robot receives the first wake-up instruction, such as "come", it detects the direction of the voice with the direction detection unit, for example by using the time difference or level of the voice arriving at the plurality of receiving units. The direction detection unit transmits the detected direction to the control unit, and the control unit can use that direction to control the drive system so that the robot pivots in place and its forward direction turns toward the user's sound source. The interaction resembles a person at work being called: they pause what their hands are doing and turn toward the caller to talk, which makes the human-machine interaction more natural.
In some possible implementations, the recognizing a sound source direction of the first voice command and steering the robot to the sound source direction specifically includes the following steps:
step S7042: identifying a sound source direction of the first voice instruction;
step S7044: turning the robot to the sound source direction without stopping the operation of the driving motor;
After the robot receives the first wake-up instruction, such as "come", it detects the direction of the voice with the direction detection unit, for example by using the time difference or level of the voice arriving at the plurality of receiving units. The direction detection unit transmits the detected direction to the control unit, and the control unit can use that direction to control the drive system so that the robot pivots in place and its forward direction turns toward the user's sound source. During this process the robot does not stop working, and the cleaning motor remains on.
Step S7046: stopping the operation of the drive motor.
After the robot has rotated to face the sound source, it stops all drive systems and keeps only the voice recognition system in the active state. The robot is then fully on standby and detects in real time whether a control instruction is issued.
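A sketch of steps S7042 to S7046 under the same hypothetical robot API as in the earlier sketch:

```python
# Sketch of S7042-S7046; the robot API is hypothetical.
def wake_and_face_user(robot, first_instruction):
    direction = robot.estimate_direction(first_instruction)  # S7042
    robot.turn_to(direction)      # S7044: pivot; the cleaning motor keeps running
    robot.stop_drive_motor()      # S7046: full standby, recognizer stays active
```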
In the embodiment of the application, when the first voice instruction is received, the robot's speech recognition system enters the active state and the robot is turned toward the sound source, placing it on standby. A control instruction for the operation to execute is then received within a certain time, and the robot performs the expected action according to it. In this way the robot can operate exactly as instructed; the embodiment improves the voice-recognition control effect under noisy conditions, raises the recognition rate of voice instructions input by the user, lets the robot work accurately according to the user's voice instruction, and also makes human-machine interaction more engaging.
Step S706: receiving a second voice instruction;
The second voice instruction is used to instruct an operation, i.e., to instruct the robot to perform an operation. The operation may be custom-set or a system default, for example a cleaning operation, a mopping operation, or a weeding operation. In a specific embodiment, the second voice instruction may be set by the user or by default; for example, it can be a user-defined phrase such as "clean here", "come over to clean", or "here". The second voice instruction (the control voice instruction) is stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the following takes "clean here" as the second voice instruction.
In an optional implementation, receiving the second voice instruction may mean judging, within a preset time period, for example 1 or 2 minutes (the period may be preset through the touch device), whether a second voice instruction indicating an operation is received. Depending on what is monitored within the preset time range, one of the following two cases is executed.
In the first case, if it is determined that the second voice command is received, the robot executes an operation instructed by the second voice command.
For example, a control command of "clean here" is detected within 1 minute, and the robot moves in the direction of the sound source according to the user's command until a third voice control command is received.
In the second case, if it is determined that the second voice command is not received, the robot returns to the original direction and continues to perform the original operation.
For example, a control command of "to clean here" is not monitored within 1 minute, and the robot performs cleaning in the original cleaning direction or position until the first voice control command is received again.
Step S708: and recognizing the sound source position of the second voice instruction and enabling the robot to move to the vicinity of the sound source position.
For example, if a "clean here" control instruction is recognized, the robot moves in the direction of the sound source according to the user's instruction until it reaches the sound source position or a third voice instruction is received.
By designating positions through voice, the embodiment of the application lets the robot work exactly as the user instructs: the user can direct the robot by voice to clean a designated position, for example a bedroom or a living room, so the robot works purposefully according to the user's intention, improving working efficiency and the user experience.
In another embodiment, as shown in fig. 8 and applied to a robot in the application scenario of fig. 1, the present application provides a robot voice control method comprising the following steps:
step S802: receiving a first voice instruction;
Generally, the robot's speech recognition system has a sleep state and an active state. For example, when the robot is working or not in use, the speech recognition system is in the sleep state; in that state it occupies few of the robot's resources and recognizes no voice instruction other than the first voice instruction.
If the speech recognition system receives the first voice instruction while in the sleep state, it switches from the sleep state to the active state. In the active state, the speech recognition system can recognize the voice instructions configured in it, such as the first voice instruction, the second voice instruction, and so on.
Specifically, the first voice instruction is used to wake up the speech recognition system, i.e., to put it into the active state. In implementation, if the speech recognition system is in the sleep state when the robot receives the first voice instruction, it switches from the sleep state to the active state; if it is already active, it may be kept active, or no operation is performed. In a specific embodiment, the first voice instruction may be set by the user or by default; for example, it can be a user-defined phrase such as "turn on voice", "turn on", or "go here". The first voice instruction (the wake-up voice instruction) is stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the following takes "come" as the first voice instruction.
Step S804: recognizing the sound source direction of the first voice instruction and steering the robot to the sound source direction;
After the robot receives the first wake-up instruction, such as "come", it detects the direction of the voice with the direction detection unit, for example by using the time difference or level of the voice arriving at the plurality of receiving units. The direction detection unit transmits the detected direction to the control unit, and the control unit can use that direction to control the drive system so that the robot pivots in place and its forward direction turns toward the user's sound source. The interaction resembles a person at work being called: they pause what their hands are doing and turn toward the caller to talk, which makes the human-machine interaction more natural.
In some possible implementations, the recognizing a sound source direction of the first voice command and steering the robot to the sound source direction specifically includes the following steps:
step S8042: identifying a sound source direction of the first voice instruction;
step S8044: turning the robot to the sound source direction without stopping the operation of the driving motor;
After the robot receives the first wake-up instruction, such as "come", it detects the direction of the voice with the direction detection unit, for example by using the time difference or level of the voice arriving at the plurality of receiving units. The direction detection unit transmits the detected direction to the control unit, and the control unit can use that direction to control the drive system so that the robot pivots in place and its forward direction turns toward the user's sound source. During this process the robot does not stop working, and the cleaning motor remains on.
Step S8046: stopping the operation of the drive motor.
After the robot has rotated to face the sound source, it stops all drive systems and keeps only the voice recognition system in the active state. The robot is then fully on standby and detects in real time whether a control instruction is issued.
In the embodiment of the application, when the first voice instruction is received, the robot's speech recognition system enters the active state and the robot is turned toward the sound source, placing it on standby. A control instruction for the operation to execute is then received within a certain time, and the robot performs the expected action according to it. In this way the robot can operate exactly as instructed; the embodiment improves the voice-recognition control effect under noisy conditions, raises the recognition rate of voice instructions input by the user, lets the robot work accurately according to the user's voice instruction, and also makes human-machine interaction more engaging.
Step S806: receiving a second voice instruction;
The second voice instruction is used to instruct an operation, i.e., to instruct the robot to perform an operation. The operation may be custom-set or a system default, for example a cleaning operation, a mopping operation, or a weeding operation. In a specific embodiment, the second voice instruction may be set by the user or by default; for example, it can be a user-defined phrase such as "clean here", "come over to clean", or "here". The second voice instruction (the control voice instruction) is stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the following takes "clean here" as the second voice instruction.
In an optional implementation, receiving the second voice instruction may mean judging, within a preset time period, for example 1 or 2 minutes (the period may be preset through the touch device), whether a second voice instruction indicating an operation is received. Depending on what is monitored within the preset time range, one of the following two cases is executed.
In the first case, step S808: if it is judged that the second voice instruction is received, the robot executes the operation indicated by the second voice instruction.
For example, a control command of "clean here" is detected within 1 minute, and the robot moves in the direction of the sound source according to the user's command until a third voice control command is received.
In the second case, step S810: if it is judged that the second voice instruction is not received, the robot returns to its original direction and continues the original operation.
For example, a control command of "to clean here" is not monitored within 1 minute, and the robot performs cleaning in the original cleaning direction or position until the first voice control command is received again.
Step S808: and recognizing the sound source position of the second voice instruction and enabling the robot to move to the vicinity of the sound source position.
In some possible implementations, to improve the user experience, the robot moves to the vicinity of the sound source position at a speed faster than its cleaning speed, for example 1.5 to 3 times that speed (preferably 1.5 to 2 times). During this movement, obstacle avoidance still slows the robot down when it meets an obstacle, preventing the dangers of excessive speed.
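A sketch of that approach-speed policy; the multiplier range comes from the text above, while the API and the obstacle check are assumptions:

```python
# Sketch of the approach-speed policy; the robot API is hypothetical.
def approach_speed(robot, multiplier=1.5):        # text suggests 1.5-3x cleaning speed
    speed = robot.cleaning_speed * multiplier
    if robot.obstacle_ahead():                    # avoidance still slows us down
        speed = min(speed, robot.safe_obstacle_speed)
    return speed
```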
Because the distance is estimated from the sound source, it is subject to error from factors such as differing sound source volume and reflection of sound signals off indoor obstacles. To further improve the positioning accuracy of the sound source position, after the robot hears the sound source instruction it uses onboard sensor devices, such as a camera or a displacement sensor, to confirm the distance to the sound source through recognition, which greatly improves positioning accuracy.
Specifically, in some possible implementations, recognizing the sound source position of the second voice instruction and moving the robot to the vicinity of the sound source position includes: identifying the sound source position of the second voice instruction; confirming the sound source position through a sensor; and moving the robot to the vicinity of the sound source position.
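A minimal sketch of this three-step flow, assuming hypothetical `microphone_array`, `range_sensor`, and `drive` interfaces on the robot (none of these names come from the patent):

```python
def move_to_sound_source(robot) -> None:
    """Locate acoustically, confirm with a sensor, then move nearby."""
    bearing, acoustic_distance = robot.microphone_array.locate_last_utterance()
    # Confirm the acoustic estimate with an onboard sensor (e.g. a camera
    # or displacement sensor) looking along the estimated bearing
    sensed_distance = robot.range_sensor.measure(bearing)
    distance = sensed_distance if sensed_distance is not None else acoustic_distance
    robot.drive.move(bearing, max(distance - 0.3, 0.0))  # assumed 0.3 m standoff
```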
It should be noted that the auxiliary sensor is not essential. Since the distance is estimated from the sound source, the correspondence between typical indoor voice volume (in decibels) and distance can be learned through machine learning before the robot leaves the factory, and the learned model written into the robot's memory. As long as the user issues voice instructions at a normal volume in a not-too-complex indoor environment, the robot can generally reach a position near the sound source and then issue a secondary confirmation request, which is sufficient for most needs.
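Purely as an illustration, such a decibel-to-distance model could be a simple regression fitted on factory calibration data; the calibration numbers below are invented for the sketch and are not from the patent.

```python
import numpy as np

# Hypothetical factory calibration: measured voice level (dB) vs. distance (m)
levels_db = np.array([72.0, 66.0, 62.0, 58.0, 55.0])
distances_m = np.array([0.5, 1.0, 2.0, 4.0, 6.0])

# Fit log-distance as a linear function of level (roughly inverse decay),
# as might be done before the robot leaves the factory
coeffs = np.polyfit(levels_db, np.log(distances_m), deg=1)

def estimate_distance(level_db: float) -> float:
    """Estimate speaker distance from the measured utterance level."""
    return float(np.exp(np.polyval(coeffs, level_db)))
```

The fitted coefficients would be stored in the robot's memory, so at runtime only the cheap `estimate_distance` lookup runs on board.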
Step S812: receiving a third voice instruction;
The third voice instruction is used to confirm the operation, i.e., to confirm whether the robot has executed the second voice instruction correctly. The instruction may be custom-set or a system default, for example: correct position, wrong position, OK, continue cleaning, reposition, and the like. In a specific embodiment, the third voice instruction may be set by the user or by default, for example user-defined phrases such as "correct position", "wrong position", or "OK". The third voice instruction (a confirmation-type voice instruction) is stored in advance in the robot or in a cloud connected to the robot. For convenience of description, the two types "correct position" and "wrong position" are used below as examples of the third voice instruction.
Step S814: and identifying the content of the third voice instruction and executing corresponding action according to the content of the third voice instruction.
In some possible implementations, recognizing the content of the third voice instruction and performing the corresponding action according to that content covers the following two cases:
in the first case, the third voice instruction is recognized as a correct-position instruction: the user confirms with the "correct position" voice instruction that the robot is positioned correctly, and the robot begins a local cleaning operation, for example cleaning the area near the user.
In the second case, the third voice instruction is recognized as a wrong-position instruction: the user rejects the robot's position with the "wrong position" voice instruction. The robot then continues moving toward the sound source position of the wrong-position instruction until it reaches the vicinity of that position and repeats the confirmation process, and it does not begin the local cleaning operation near the user until a correct-position instruction is received.
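A hedged sketch of this confirmation handling; the command strings and the robot and recognizer method names are assumptions for illustration:

```python
def handle_confirmation(robot, recognizer, command: str) -> None:
    if command == "correct position":
        # First case: position confirmed, clean the area near the user
        robot.start_local_cleaning()
    elif command == "wrong position":
        # Second case: re-localize on the new utterance and ask again,
        # deferring local cleaning until "correct position" is heard
        bearing, distance = recognizer.locate_last_utterance()
        robot.drive_to(bearing, distance)
        robot.request_confirmation()
```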
By adopting voice designation, this embodiment of the application enables the robot to work precisely as instructed: the user directs the robot by voice to clean a designated position, for example cleaning the bedroom or cleaning the living room, so the robot works purposefully according to the user's intent. At the same time, adding a sensor as a positioning aid during designated-position cleaning increases the robot's position recognition accuracy, improves working efficiency, and enhances the user experience.
In another embodiment, as shown in fig. 9, in combination with the robot applied to the application scenario of fig. 1, an embodiment of the present application provides a robot voice control apparatus, which includes a first receiving unit 902, a first recognition unit 904, a second receiving unit 906, and a second recognition unit 908, each of which is described as follows. The apparatus shown in fig. 9 can execute the method of the embodiment shown in fig. 7, and reference may be made to the related description of the embodiment shown in fig. 7 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution are described in the embodiment shown in fig. 7, and are not described herein again.
A first receiving unit 902, configured to receive a first voice instruction;
a first recognition unit 904 for recognizing a sound source direction of the first voice instruction and turning the robot to the sound source direction;
a second receiving unit 906, configured to receive a second voice instruction;
a second recognition unit 908, configured to recognize a sound source position of the second voice instruction and move the robot to the vicinity of the sound source position.
For example, if the control instruction "clean here" is recognized, the robot moves toward the sound source according to the user's instruction until it reaches the sound source position or a third voice control instruction is received.
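Purely as an illustration of how the units of fig. 9 compose into a pipeline, assuming hypothetical receiver, recognizer, and drive interfaces (the class and method names are not from the patent):

```python
class VoiceControlApparatus:
    """Illustrative composition of the four units shown in fig. 9."""

    def __init__(self, receiver, recognizer, drive):
        self.receiver = receiver      # plays the role of units 902/906
        self.recognizer = recognizer  # plays the role of units 904/908
        self.drive = drive

    def run_once(self):
        wake = self.receiver.receive()                 # first receiving unit 902
        direction = self.recognizer.direction(wake)    # first recognition unit 904
        self.drive.turn_to(direction)
        control = self.receiver.receive()              # second receiving unit 906
        position = self.recognizer.position(control)   # second recognition unit 908
        self.drive.move_near(position)
```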
By adopting voice designation, this embodiment of the application enables the robot to work precisely as instructed: the user directs the robot by voice to clean a designated position, for example cleaning the bedroom or cleaning the living room, so the robot works purposefully according to the user's intent, improving working efficiency and enhancing the user experience.
In another embodiment, as shown in fig. 10, in combination with the robot applied to the application scenario of fig. 1, the present application provides a robot voice control apparatus, which includes a first receiving unit 1002, a first recognition unit 1004, a second receiving unit 1006, a second recognition unit 1008, a third receiving unit 1010, and a third recognition unit 1012, each of which is described as follows. The apparatus shown in fig. 10 can perform the method of the embodiment shown in fig. 8, and reference may be made to the related description of that embodiment for any part of this embodiment not described in detail. For the implementation process and technical effect of this technical solution, refer to the description of the embodiment shown in fig. 8, which is not repeated here.
A first receiving unit 1002, configured to receive a first voice instruction;
a first recognition unit 1004 for recognizing a sound source direction of the first voice instruction and turning the robot to the sound source direction;
a second receiving unit 1006, configured to receive a second voice instruction;
a second recognition unit 1008 configured to recognize a sound source position of the second voice instruction and move the robot to the vicinity of the sound source position.
A third receiving unit 1010, configured to receive a third voice instruction;
a third recognition unit 1012, configured to recognize content of the third voice instruction and execute a corresponding action according to the content of the third voice instruction.
In some possible implementations, the recognizing the content of the third voice instruction and performing the corresponding action according to the content of the third voice instruction include the following two cases:
in the first case, the third voice instruction is recognized as a correct-position instruction: the user confirms with the "correct position" voice instruction that the robot is positioned correctly, and the robot begins a local cleaning operation, for example cleaning the area near the user.
In the second case, the third voice instruction is recognized as a wrong-position instruction: the user rejects the robot's position with the "wrong position" voice instruction. The robot then continues moving toward the sound source position of the wrong-position instruction until it reaches the vicinity of that position and repeats the confirmation process, and it does not begin the local cleaning operation near the user until a correct-position instruction is received.
In some possible implementations, the first recognition unit 1004 is further configured to:
identifying a sound source direction of the first voice instruction;
turning the robot to the sound source direction while the cleaning motor continues to run;
stopping the operation of the cleaning motor and the drive motor after the turn is complete (see the sketch below).
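A minimal sketch of this turn sequence, written to match the behavior recited in claim 1 below (the sweeping motor keeps running during the turn, then both motors stop); the motor method names are assumptions:

```python
def turn_to_sound_source(drive_motor, cleaning_motor, bearing_rad: float) -> None:
    # Cleaning continues while the drive motor turns the robot in place,
    # so the user does not perceive an interruption in cleaning
    drive_motor.rotate_in_place(bearing_rad)
    # After the turn completes, stop both motors and await the second
    # voice instruction in the standby state
    cleaning_motor.stop()
    drive_motor.stop()
```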
In some possible implementations, the third recognition unit 1012 is further configured to:
and recognizing that the third voice command is a command with a correct position, and starting to execute a local cleaning action by the robot.
In some possible implementations, the third recognition unit 1012 is further configured to:
recognizing that the third voice instruction is a wrong-position instruction, and continuing to move the robot toward the sound source position according to the sound source position of the wrong-position instruction;
and starting the robot's local cleaning action once the correct-position instruction is received.
In some possible implementations, the second recognition unit 1008 is further configured to:
identifying a sound source position of the second voice instruction;
confirming the sound source position through a sensor;
moving the robot to the vicinity of the sound source position.
In some possible implementations, the moving the robot to the vicinity of the sound source position includes:
the robot is moved to the vicinity of the sound source position at a moving speed faster than that at the time of cleaning.
In some possible implementations, the first voice instruction is a wake-up voice instruction, and the second voice instruction is a control voice instruction.
In some possible implementations, the wake-up voice command and the control voice command are pre-stored in the robot or a cloud connected to the robot.
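As an illustration of such pre-stored instruction matching, with a local table and an optional cloud fallback; the wake phrase shown is a placeholder, not the patent's, and the `cloud.lookup` interface is assumed:

```python
from typing import Optional

# Hypothetical pre-stored instruction table on the robot
LOCAL_COMMANDS = {
    "hello robot": "wake",     # wake-up voice instruction (placeholder phrase)
    "clean here": "control",   # control voice instruction
}

def classify_utterance(text: str, cloud=None) -> Optional[str]:
    """Match an utterance against stored instructions, locally then in the cloud."""
    kind = LOCAL_COMMANDS.get(text.strip().lower())
    if kind is None and cloud is not None:
        kind = cloud.lookup(text)  # cloud-side store mirrors the local table
    return kind
```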
By adopting voice designation, this embodiment of the application enables the robot to work precisely as instructed: the user directs the robot by voice to clean a designated position, for example cleaning the bedroom or cleaning the living room, so the robot works purposefully according to the user's intent. At the same time, adding a sensor as a positioning aid during designated-position cleaning increases the robot's position recognition accuracy, improves working efficiency, and enhances the user experience.
An embodiment of the present application provides a robot including the robot voice control device as described above.
The embodiment of the present application provides a robot, which includes a processor and a memory, where the memory stores computer program instructions capable of being executed by the processor, and when the processor executes the computer program instructions, the method steps of any of the foregoing embodiments are implemented.
Embodiments of the present application provide a non-transitory computer readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps of any of the preceding embodiments.
As shown in fig. 11, robot 1100 may include a processing device (e.g., central processor, graphics processor, etc.) 1101 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1102 or a program loaded from storage device 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the electronic robot 1100 are also stored. The processing device 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
Generally, the following devices may be connected to the I/O interface 1105: input devices 1106 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1107 including, for example, liquid crystal displays (LCDs), speakers, vibrators, and the like; storage devices 1108 including, for example, magnetic tape, hard disk, etc.; and a communication device 1109. The communication device 1109 can allow the electronic robot 1100 to perform wireless or wired communication with other robots to exchange data. While fig. 11 illustrates an electronic robot 1100 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 1109, or installed from the storage device 1108, or installed from the ROM 1102. The computer program, when executed by the processing device 1101, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the robot; or may be separate and not assembled into the robot.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (17)
1. A voice control method for a sweeping robot is characterized by comprising the following steps:
the sweeping robot receives a first voice instruction, wherein the sweeping robot comprises a driving motor and a sweeping motor;
recognizing the sound source direction of the first voice command, and driving the sweeping robot to turn to the sound source direction by the driving motor while the sweeping motor is not stopped;
stopping the operation of the cleaning motor and the driving motor, and receiving a second voice command;
recognizing the sound source position of the second voice command and driving the sweeping robot to move to the vicinity of the sound source position by the driving motor;
receiving a third voice instruction;
and identifying the content of the third voice instruction to confirm whether the sweeping robot has executed the second voice instruction correctly, and executing a corresponding action according to the content of the third voice instruction.
2. The method of claim 1, wherein the identifying the content of the third voice instruction to confirm whether the sweeping robot has executed the second voice instruction correctly and executing a corresponding action according to the content of the third voice instruction comprises:
and recognizing that the third voice command is a command with a correct position, and starting to execute a local cleaning action by the sweeping robot.
3. The method of claim 1, wherein the identifying the content of the third voice instruction to confirm whether the sweeping robot has executed the second voice instruction correctly and executing a corresponding action according to the content of the third voice instruction comprises:
recognizing that the third voice instruction is a position error instruction, and continuing to move the sweeping robot to a sound source position according to the sound source position of the position error instruction;
and starting the sweeping robot to execute the action of local sweeping until the position correct type instruction is received.
4. The method according to any one of claims 1-3, wherein the recognizing the sound source position of the second voice command and the driving the sweeping robot by the driving motor to move to the vicinity of the sound source position comprises:
identifying a sound source position of the second voice instruction;
confirming the sound source position through a sensor;
and driving the sweeping robot to move to the position near the sound source by the driving motor.
5. The method of claim 4, wherein said driving the sweeping robot by the drive motor to move to the vicinity of the sound source location comprises:
and driving the sweeping robot to move to the vicinity of the sound source position at a moving speed higher than that during sweeping by the driving motor.
6. The method of claim 4, wherein:
the first voice instruction is a wake-up voice instruction, and the second voice instruction is a control voice instruction.
7. The method of claim 6, wherein: the awakening voice instruction and the control voice instruction are stored in the sweeping robot or a cloud end connected with the sweeping robot in advance.
8. A voice control device for a sweeping robot, characterized by comprising:
the first receiving unit is used for receiving a first voice instruction;
a first recognition unit for recognizing a sound source direction of the first voice instruction;
the driving system comprises a driving motor and a cleaning motor, and is used for controlling the cleaning motor to continuously work after the first recognition unit recognizes the sound source direction of the first voice instruction, controlling the driving motor to drive the sweeping robot to turn to the sound source direction at the same time, and controlling the cleaning motor and the driving motor to stop running after the turning is finished;
the second receiving unit is used for receiving a second voice instruction;
a second recognition unit configured to recognize a sound source position of the second voice instruction;
the driving system is further used for controlling the driving motor to drive the sweeping robot to move to the position near the sound source after the second recognition unit recognizes the sound source position of the second voice instruction;
a third receiving unit, configured to receive a third voice instruction;
the third recognition unit is used for recognizing the content of the third voice instruction so as to confirm whether the sweeping robot executes the second voice instruction correctly or not;
the driving system is further used for driving the sweeping robot to execute a corresponding action according to the content of the third voice instruction recognized by the third recognition unit.
9. The apparatus of claim 8,
the third identification unit is further configured to: recognizing the third voice instruction as a position correct instruction;
the driving system is also used for controlling the driving motor to drive the sweeping robot to start to execute the action of local sweeping after the third voice command is recognized as the position correct command by the third recognition unit.
10. The apparatus of claim 8,
the third identification unit is further configured to: recognizing the third voice instruction as a position error type instruction,
the driving system is also used for controlling the driving motor to drive the sweeping robot to move to the sound source position continuously according to the sound source position of the position error instruction after the third voice instruction is recognized as the position error instruction by the third recognition unit; and after the third identification unit receives the position accuracy instruction, the driving motor is controlled to drive the sweeping robot to start to execute the action of local sweeping.
11. The apparatus according to any of claims 8-10, wherein the second identification unit is further configured to:
identifying a sound source position of the second voice instruction;
confirming the sound source position through a sensor.
12. The apparatus of claim 8, wherein the driving system controlling the driving motor to drive the sweeping robot to move to the vicinity of the sound source position comprises:
and the driving system controls the driving motor to drive the sweeping robot to move to the position near the sound source at a moving speed higher than that during sweeping.
13. The apparatus of claim 8, wherein:
the first voice instruction is a wake-up voice instruction, and the second voice instruction is a control voice instruction.
14. The apparatus of claim 13, wherein: the awakening voice instruction and the control voice instruction are stored in the sweeping robot or a cloud end connected with the sweeping robot in advance.
15. A voice control device for a sweeping robot, comprising a processor and a memory, said memory storing computer program instructions executable by said processor, said processor implementing the method steps of any one of claims 1 to 7 when executing said computer program instructions.
16. A sweeping robot comprising a device as claimed in any one of claims 8 to 14.
17. A non-transitory computer readable storage medium having stored thereon computer program instructions which, when invoked and executed by a processor, perform the method steps of any of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210225162.XA CN114468898B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method, device, robot and medium |
CN201910265952.9A CN110051289B (en) | 2019-04-03 | 2019-04-03 | Voice control method and device for sweeping robot, robot and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910265952.9A CN110051289B (en) | 2019-04-03 | 2019-04-03 | Voice control method and device for sweeping robot, robot and medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210225162.XA Division CN114468898B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method, device, robot and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110051289A CN110051289A (en) | 2019-07-26 |
CN110051289B true CN110051289B (en) | 2022-03-29 |
Family
ID=67318233
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910265952.9A Active CN110051289B (en) | 2019-04-03 | 2019-04-03 | Voice control method and device for sweeping robot, robot and medium |
CN202210225162.XA Active CN114468898B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method, device, robot and medium |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210225162.XA Active CN114468898B (en) | 2019-04-03 | 2019-04-03 | Robot voice control method, device, robot and medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110051289B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110379424B (en) * | 2019-07-29 | 2021-11-02 | 方毅 | Method for controlling accurate point reaching through voice |
WO2021022420A1 (en) * | 2019-08-02 | 2021-02-11 | 深圳市无限动力发展有限公司 | Audio collection method, apparatus, and mobile robot |
CN110428850A (en) * | 2019-08-02 | 2019-11-08 | 深圳市无限动力发展有限公司 | Voice pick-up method, device, storage medium and mobile robot |
CN117398023A (en) * | 2019-11-19 | 2024-01-16 | 科沃斯机器人股份有限公司 | Self-moving robot following method and self-moving robot |
CN110881909A (en) * | 2019-12-20 | 2020-03-17 | 小狗电器互联网科技(北京)股份有限公司 | Control method and device of sweeper |
CN110946518A (en) * | 2019-12-20 | 2020-04-03 | 小狗电器互联网科技(北京)股份有限公司 | Control method and device of sweeper |
CN111261012B (en) * | 2020-01-19 | 2022-01-28 | 佛山科学技术学院 | Pneumatic teaching trolley |
CN111358368A (en) * | 2020-03-05 | 2020-07-03 | 宁波大学 | Manual guide type floor sweeping robot |
CN112155485B (en) * | 2020-09-14 | 2023-02-28 | 美智纵横科技有限责任公司 | Control method, control device, cleaning robot and storage medium |
CN113739322A (en) * | 2021-08-20 | 2021-12-03 | 科沃斯机器人股份有限公司 | Purifier and control method thereof |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001057853A1 (en) * | 2000-01-31 | 2001-08-09 | Japan Science And Technology Corporation | Robot auditory device |
CN104934033A (en) * | 2015-04-21 | 2015-09-23 | 深圳市锐曼智能装备有限公司 | Control method of robot sound source positioning and awakening identification and control system of robot sound source positioning and awakening identification |
CN106328132A (en) * | 2016-08-15 | 2017-01-11 | 歌尔股份有限公司 | Voice interaction control method and device for intelligent equipment |
CN108814449A (en) * | 2018-07-30 | 2018-11-16 | 马鞍山问鼎网络科技有限公司 | A kind of artificial intelligence sweeping robot control method based on phonetic order |
CN109346069A (en) * | 2018-09-14 | 2019-02-15 | 北京赋睿智能科技有限公司 | A kind of interactive system and device based on artificial intelligence |
CN109358751A (en) * | 2018-10-23 | 2019-02-19 | 北京猎户星空科技有限公司 | A kind of wake-up control method of robot, device and equipment |
CN109377991A (en) * | 2018-09-30 | 2019-02-22 | 珠海格力电器股份有限公司 | Intelligent equipment control method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3771812B2 (en) * | 2001-05-28 | 2006-04-26 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Robot and control method thereof |
KR101356165B1 (en) * | 2012-03-09 | 2014-01-24 | 엘지전자 주식회사 | Robot cleaner and controlling method of the same |
CN105957521B (en) * | 2016-02-29 | 2020-07-10 | 青岛克路德机器人有限公司 | Voice and image composite interaction execution method and system for robot |
CN109093627A (en) * | 2017-06-21 | 2018-12-28 | 富泰华工业(深圳)有限公司 | intelligent robot |
CN109202897A (en) * | 2018-08-07 | 2019-01-15 | 北京云迹科技有限公司 | Information transferring method and system |
CN108831483A (en) * | 2018-09-07 | 2018-11-16 | 马鞍山问鼎网络科技有限公司 | A kind of artificial intelligent voice identifying system |
2019
- 2019-04-03 CN CN201910265952.9A patent/CN110051289B/en active Active
- 2019-04-03 CN CN202210225162.XA patent/CN114468898B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001057853A1 (en) * | 2000-01-31 | 2001-08-09 | Japan Science And Technology Corporation | Robot auditory device |
CN104934033A (en) * | 2015-04-21 | 2015-09-23 | 深圳市锐曼智能装备有限公司 | Control method of robot sound source positioning and awakening identification and control system of robot sound source positioning and awakening identification |
CN106328132A (en) * | 2016-08-15 | 2017-01-11 | 歌尔股份有限公司 | Voice interaction control method and device for intelligent equipment |
CN108814449A (en) * | 2018-07-30 | 2018-11-16 | 马鞍山问鼎网络科技有限公司 | A kind of artificial intelligence sweeping robot control method based on phonetic order |
CN109346069A (en) * | 2018-09-14 | 2019-02-15 | 北京赋睿智能科技有限公司 | A kind of interactive system and device based on artificial intelligence |
CN109377991A (en) * | 2018-09-30 | 2019-02-22 | 珠海格力电器股份有限公司 | Intelligent equipment control method and device |
CN109358751A (en) * | 2018-10-23 | 2019-02-19 | 北京猎户星空科技有限公司 | A kind of wake-up control method of robot, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114468898B (en) | 2023-05-05 |
CN110051289A (en) | 2019-07-26 |
CN114468898A (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110051289B (en) | Voice control method and device for sweeping robot, robot and medium | |
CN110623606B (en) | Cleaning robot and control method thereof | |
CN110495821B (en) | Cleaning robot and control method thereof | |
CN110136704B (en) | Robot voice control method and device, robot and medium | |
AU2018100726A4 (en) | Automatic cleaning device and cleaning method | |
TWI821992B (en) | Cleaning robot and control method thereof | |
CN112205937B (en) | Automatic cleaning equipment control method, device, equipment and medium | |
CN109932726B (en) | Robot ranging calibration method and device, robot and medium | |
CN109920425B (en) | Robot voice control method and device, robot and medium | |
CN111990930B (en) | Distance measuring method, distance measuring device, robot and storage medium | |
CN210931183U (en) | Cleaning robot | |
CN210931181U (en) | Cleaning robot | |
CN210931182U (en) | Cleaning robot | |
CN217792839U (en) | Automatic cleaning equipment | |
TW202305534A (en) | Robot obstacle avoidance method and related device, robot, storage medium and electronic equipment | |
CN210673215U (en) | Multi-light-source detection robot | |
CN116942017A (en) | Automatic cleaning device, control method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20220424
Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing
Patentee after: Beijing Stone Innovation Technology Co.,Ltd.
Address before: No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085
Patentee before: Beijing Roborock Technology Co.,Ltd.
TR01 | Transfer of patent right |