CN106796790B - Robot voice instruction recognition method and related robot device - Google Patents

Robot voice instruction recognition method and related robot device

Info

Publication number
CN106796790B
CN106796790B (application CN201680002660.0A)
Authority
CN
China
Prior art keywords
robot
current
accuracy
preset
recognition accuracy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680002660.0A
Other languages
Chinese (zh)
Other versions
CN106796790A (en)
Inventor
骆磊 (Luo Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Holdings Co Ltd filed Critical Cloudminds Shenzhen Holdings Co Ltd
Publication of CN106796790A
Application granted
Publication of CN106796790B

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/20 — Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/28 — Constructional details of speech recognition systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the invention discloses a robot voice-instruction recognition method and a related robot device. The method comprises: after detecting that a user has uttered speech, and when the current voice-instruction recognition accuracy is lower than a preset accuracy threshold, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold. In this way, the robot device can receive voice instructions more accurately without stopping its current action, so that voice-instruction recognition accuracy and working efficiency are well coordinated.

Description

Robot voice instruction recognition method and related robot device
Technical Field
The embodiment of the invention relates to the field of robots, in particular to a method for recognizing a voice command of a robot and a related robot device.
Background
When a robot that can move and has mechanical actuation capability is working, noise is inevitably generated by the operation of its steering engines and/or motors, or by its operation or cleaning of other objects (such as opening a door, washing dishes, or sweeping the floor). If the user issues a voice instruction during this process, the noise inevitably affects the accuracy of the robot's speech recognition. Even if a noise-reduction algorithm is adopted, it can only filter steady-state noise; the variable and unpredictable noise produced while the robot works is difficult to filter out. Another approach is for the robot to stop its current action once it detects that the user is speaking and to resume the action after the voice instruction has been received, but this inevitably reduces working efficiency.
In view of the above, overcoming these drawbacks of the prior art is an urgent problem in the art.
Disclosure of Invention
The technical problem mainly addressed by the embodiment of the invention is to provide a robot voice-instruction recognition method and a related robot device that better coordinate voice-instruction recognition accuracy with working efficiency.
To solve the above technical problem, one technical solution adopted by the embodiment of the present invention is to provide a robot voice-instruction recognition method, comprising: after detecting that a user has uttered speech, and when the current voice-instruction recognition accuracy is lower than a preset accuracy threshold, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
The method further comprises: acquiring the current noise level; and determining the current voice-instruction recognition accuracy from the current noise level according to a correspondence between noise level and recognition accuracy.
The preset accuracy threshold corresponds to the priority of the task currently executed by the robot; the higher the priority of the executed task, the lower the corresponding accuracy threshold.
Adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold comprises: determining an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and adjusting the robot's action according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
The method further comprises: when the adjusted voice-instruction recognition accuracy of the robot does not reach the preset accuracy threshold, notifying the user that the current environmental noise is high or prompting the user to come closer to the robot.
Notifying the user when the adjusted accuracy does not reach the preset accuracy threshold may specifically mean: when the adjusted voice-instruction recognition accuracy of the robot does not reach the preset accuracy threshold and the priority of the currently executed task is lower than a preset priority, notifying the user that the current environmental noise is high or prompting the user to come closer to the robot.
Alternatively, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold comprises: adjusting the robot's action step by step according to a preset adjustment amplitude until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
The method further comprises: when the robot has stopped acting after its working state was adjusted and the voice-instruction recognition accuracy still does not reach the preset accuracy threshold, notifying the user that the current environmental noise is high or prompting the user to come closer to the robot.
Adjusting the working state of the robot comprises one or more of the following: slowing down the action speed, reducing the motor rotation speed, and shutting down non-human-voice-band steering engines.
To solve the above technical problem, another technical solution adopted by the embodiment of the present invention is to provide a robot voice-instruction recognition apparatus, comprising: an adjusting module, configured to adjust the working state of the robot after the user has uttered speech and when the current voice-instruction recognition accuracy is lower than a preset accuracy threshold, until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
The above apparatus further comprises: an acquisition module, configured to acquire the current noise level; and an accuracy determining module, configured to determine the current voice-instruction recognition accuracy from the current noise level according to the correspondence between noise level and recognition accuracy.
The above adjusting module comprises: an amplitude determining unit, configured to determine an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and a first adjusting unit, configured to adjust the robot's action according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
In order to solve the above technical problem, another technical solution adopted by the embodiment of the present invention is: provided is a robot device including:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a program of instructions executable by the at least one processor to cause the at least one processor to:
after detecting that a user sends out voice, and when the current voice instruction recognition accuracy is lower than a preset accuracy threshold, adjusting the working state of the robot until the adjusted voice instruction recognition accuracy of the robot reaches the preset accuracy threshold.
Compared with the prior art, the embodiments of the invention have the following beneficial effects:
In the embodiment of the invention, after a voice instruction from the user is detected, and when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, the robot device adjusts its own working state, for example by slowing down its action speed, so that the adjusted recognition accuracy reaches the preset accuracy threshold. The robot device can thus receive the voice instruction more accurately without stopping its action, and voice-instruction recognition accuracy and working efficiency are better coordinated.
Drawings
FIG. 1 is a flow diagram of one embodiment of a method of robotic voice command recognition of the present invention;
FIG. 2 is a schematic diagram of a robot voice command recognition apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of one embodiment of the robot apparatus of the present invention.
Detailed Description
The embodiment of the invention provides a robot voice instruction recognition method, a robot voice instruction recognition device and a robot device.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for recognizing a robot voice command according to an embodiment of the present invention. As shown in fig. 1, the present embodiment includes:
step 101: after detecting that a user sends out voice, judging whether the recognition accuracy of the current voice instruction is lower than a preset accuracy threshold;
the execution main body of the present embodiment may specifically be a robot apparatus. The robot device can continuously detect whether the user utters voice in the task execution process, for example, when the user voice is captured, the user is considered to be uttering voice, and then whether the current voice instruction recognition accuracy is lower than a preset accuracy threshold is judged.
Preferably, the preset accuracy threshold corresponds to the priority of the task currently executed by the robot; the higher the priority of the executed task, the lower the corresponding accuracy threshold. In this way, a higher-priority task is less likely to be disturbed by a voice instruction issued by the user.
For example, the robot's task priorities are divided into five levels, 1 to 5 (in practice, any number of levels may be used), where level 1 is the highest priority and level 5 the lowest. The priority of a voice instruction may be defined between any two task priorities, such as 1.5, 2.5, 3.5, or 4.5, so that a voice instruction never shares a priority with any task. The accuracy thresholds preset for the task priorities are x1, x2, x3, x4, and x5, corresponding to priority levels 1 through 5, where x1 is the lowest threshold and x5 the highest.
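The priority-to-threshold correspondence described above can be sketched as a simple lookup. This is an illustrative sketch only: the five priority levels come from the example, but the concrete threshold values standing in for x1–x5, and the function names, are assumptions rather than values from the patent.

```python
# Hypothetical priority-to-threshold table. Higher-priority tasks (lower
# level number) get LOWER accuracy thresholds, so a voice instruction is
# less likely to interrupt them. Values are illustrative, not from the patent.
ACCURACY_THRESHOLDS = {1: 0.60, 2: 0.70, 3: 0.80, 4: 0.85, 5: 0.90}


def preset_accuracy_threshold(task_priority: int) -> float:
    """Return the preset accuracy threshold for the current task's priority."""
    return ACCURACY_THRESHOLDS[task_priority]


def should_adjust(current_accuracy: float, task_priority: int) -> bool:
    """True when the working state must be adjusted (accuracy below threshold)."""
    return current_accuracy < preset_accuracy_threshold(task_priority)
```

With these illustrative numbers, a measured accuracy of 0.75 would trigger an adjustment while the robot runs a low-priority (level 5) task, but not during a top-priority (level 1) task.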
The current voice-instruction recognition accuracy may be determined from the current noise, for example by acquiring the current noise level and determining the accuracy from a locally stored correspondence between noise level and recognition accuracy. For example, noise level 1 may cover 10 dB or more but less than 20 dB, and noise level 2 may cover 20 dB or more but less than 30 dB; the higher the noise level, the lower the voice-instruction recognition accuracy.
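A minimal sketch of this noise-level lookup, assuming the 10 dB-wide bands from the example; the accuracy values in the table and the function names are illustrative assumptions, not taken from the patent.

```python
def noise_level(noise_db: float) -> int:
    """Map a measured noise value in dB to a discrete noise level.
    Level 1 covers [10, 20) dB, level 2 covers [20, 30) dB, and so on."""
    if noise_db < 10:
        return 0  # below level 1: negligible noise
    return int((noise_db - 10) // 10) + 1


# Locally stored correspondence: the higher the noise level, the lower
# the expected voice-instruction recognition accuracy (values assumed).
LEVEL_TO_ACCURACY = {0: 0.99, 1: 0.95, 2: 0.90, 3: 0.80, 4: 0.65, 5: 0.50}


def current_recognition_accuracy(noise_db: float) -> float:
    """Look up the expected recognition accuracy for the measured noise,
    clamping very loud measurements to the highest stored level."""
    level = min(noise_level(noise_db), max(LEVEL_TO_ACCURACY))
    return LEVEL_TO_ACCURACY[level]
```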
Alternatively, after detecting that the user has issued a voice instruction, the robot may attempt once to recognize what the instruction expresses, and if the content cannot be recognized, it judges that the current voice-instruction recognition accuracy is lower than the preset accuracy threshold. That is, in step 101, after detecting that the user has issued a voice instruction and before judging whether the current accuracy is below the threshold, the method may further include recognizing the content expressed by the user's voice instruction. In this case, judging whether the current voice-instruction recognition accuracy is lower than the preset accuracy threshold may specifically include: when the content expressed by the user's voice instruction cannot be recognized, judging that the current voice-instruction recognition accuracy is lower than the preset accuracy threshold.
Step 102: when the current voice-instruction recognition accuracy is judged to be lower than the preset accuracy threshold, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
Specifically, adjusting the working state of the robot may include one or more of the following: slowing down the action speed, reducing the motor rotation speed, and shutting down non-human-voice-band steering engines. This reduces the influence of the operating noise of the robot, or of the objects it operates on, on the voice-instruction recognition accuracy, so that the current recognition accuracy improves and the robot device can correctly receive the voice instruction.
A correspondence between the recognition-accuracy difference and the adjustment amplitude may be preset empirically, so that the working state of the robot can be adjusted according to the difference. Thus, in step 102, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy reaches the preset accuracy threshold may specifically include: determining an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and adjusting the robot's action according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
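The empirically preset difference-to-amplitude correspondence can be sketched as a small lookup table; the breakpoints and reduction fractions below are illustrative assumptions, not values from the patent.

```python
# Hypothetical empirically preset table: the larger the gap between the
# current accuracy and the threshold, the larger the adjustment amplitude
# (here, the fraction by which action/motor speed is reduced).
DIFF_TO_AMPLITUDE = [
    (0.05, 0.20),  # gap < 0.05 -> reduce by 20%
    (0.15, 0.40),  # gap < 0.15 -> reduce by 40%
    (0.30, 0.60),  # gap < 0.30 -> reduce by 60%
]


def adjustment_amplitude(current_accuracy: float, threshold: float) -> float:
    """Pick an adjustment amplitude from the accuracy shortfall."""
    gap = threshold - current_accuracy
    for max_gap, amplitude in DIFF_TO_AMPLITUDE:
        if gap < max_gap:
            return amplitude
    return 0.80  # very large gap: slow the robot down drastically
```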
Under normal conditions, after the working state of the robot has been adjusted according to the difference, the voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold. If it cannot reach the threshold, it is judged that the current noise comes from the external environment rather than from the robot's own movement, and the user can be notified that the current environmental noise is high and recognition may be incorrect, or be prompted to come closer so that the voice instruction can be received better. Preferably, when the adjusted recognition accuracy does not reach the preset accuracy threshold and the priority of the currently executed task is lower than a preset priority, the user is notified that the current environmental noise is high or prompted to come closer, so that a higher-priority task is less likely to be disturbed by the user's voice instruction.
Of course, the robot's action may also be adjusted step by step; that is, adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy reaches the preset accuracy threshold may include: adjusting the robot's action step by step according to a preset adjustment amplitude until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold. For example, the action speed and amplitude that affect the human-voice frequency band can be decremented in steps of 20%; after each decrement, the current noise level is acquired again and it is judged whether the current voice-instruction recognition accuracy is still lower than the preset accuracy threshold. Once it is no longer lower, the current action speed, amplitude, and so on are maintained until the voice instruction has been fully received.
If the stepwise decrement continues until the robot stops moving and the voice-instruction recognition accuracy is still lower than the preset accuracy threshold, it is judged that the noise comes from the external environment rather than from the robot's own movement. The user can then be notified that the current environmental noise is high and recognition may be incorrect, or be prompted to come closer so that the voice instruction can be received better.
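The stepwise decrement with its environmental-noise fallback can be sketched as a loop; `measure_accuracy` and `notify_user` are hypothetical callbacks standing in for the robot's accuracy measurement and its user-notification mechanism, and the 20% step comes from the example above.

```python
STEP = 0.20  # preset adjustment amplitude per step (20%, from the example)


def stepwise_adjust(measure_accuracy, threshold, notify_user):
    """Decrement the working-state scale by one STEP at a time until the
    measured recognition accuracy reaches the threshold; if the robot is
    fully stopped and accuracy is still too low, the noise is judged to
    come from the external environment and the user is notified."""
    scale = 1.0  # 1.0 = full action speed/amplitude, 0.0 = stopped
    while scale > 0.0:
        if measure_accuracy(scale) >= threshold:
            return scale  # keep this speed until the instruction is received
        scale = max(0.0, scale - STEP)
    if measure_accuracy(scale) < threshold:
        notify_user("Ambient noise is high; recognition may be incorrect. "
                    "Please come closer to the robot.")
    return scale
```

For instance, if slowing down does help (accuracy rises as `scale` falls), the loop settles at the first scale that meets the threshold; if the noise is purely environmental (accuracy never improves), the loop runs down to a full stop and then notifies the user.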
Step 103: when the current voice-instruction recognition accuracy is judged to be not lower than the preset accuracy threshold, continuing to receive the speech uttered by the user.
In this embodiment, after detecting that the user has issued a voice instruction, and when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, the robot device adjusts its own working state, for example by slowing down its action speed, so that the adjusted recognition accuracy reaches the preset accuracy threshold. The robot device can thus receive the voice instruction more accurately without stopping its action, and recognition accuracy and working efficiency are better coordinated. If the robot adjusts quickly enough, convergence completes rapidly, the user's current task is essentially unaffected, and the experience is good; if the adjustment is slower, convergence may take longer, but the result is still better than not applying the method at all.
It should be noted that, in some embodiments, the judgment of whether the current voice-instruction recognition accuracy is lower than the preset accuracy threshold need not be performed only after a voice instruction from the user is detected. For example, the robot device may make this judgment after starting a new task, store the result locally, and read the stored result after detecting that the user has issued a voice instruction.
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of a robot voice command recognition apparatus according to an embodiment of the present invention. As shown in fig. 2, the present embodiment includes:
a detection module 201, configured to detect a voice instruction sent by a user;
the detection module may continuously detect whether the user utters a voice command, for example, it may be considered that the user is uttering a voice command when capturing the user's voice.
The judging module 202 is configured to judge whether the current speech instruction recognition accuracy is lower than a preset accuracy threshold;
Preferably, the preset accuracy threshold corresponds to the priority of the task currently executed by the robot; the higher the priority of the executed task, the lower the corresponding accuracy threshold. In this way, a higher-priority task is less likely to be disturbed by a voice instruction issued by the user.
The current voice-instruction recognition accuracy may be determined based on the current noise. For example, the robot voice-instruction recognition apparatus may further include: an acquisition module, configured to acquire the current noise level; and an accuracy determining module, configured to determine the current voice-instruction recognition accuracy from the current noise level according to the correspondence between noise level and recognition accuracy.
In this embodiment, after the detection module 201 detects that the user has issued a voice instruction, the judging module 202 is triggered to perform its judgment.
The adjusting module 203 is configured to adjust the working state of the robot after the detection module detects that the user has issued a voice instruction, and when the judging module judges that the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
A correspondence between the recognition-accuracy difference and the adjustment amplitude may be preset empirically, so that the working state of the robot can be adjusted according to the difference. For example, the adjusting module 203 may include: an amplitude determining unit, configured to determine an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and a first adjusting unit, configured to adjust the robot's action according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
The robot's action may also be adjusted step by step. For example, the adjusting module 203 may include: a second adjusting unit, configured to adjust the robot's action step by step according to a preset adjustment amplitude until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
Specifically, the adjustment performed by the adjusting module 203 may include one or more of the following: slowing down the action speed, reducing the motor rotation speed, and shutting down non-human-voice-band steering engines, thereby reducing the influence of the operating noise of the robot, or of the objects it operates on, on the voice-instruction recognition accuracy, so that the current recognition accuracy improves and the robot device can correctly receive the voice instruction.
In this embodiment, after the detection module detects that the user has issued a voice instruction, and when the judging module judges that the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, the adjusting module adjusts the working state of the robot, for example by slowing down its action speed, so that the adjusted recognition accuracy reaches the preset accuracy threshold. The robot device can thus receive the voice instruction more accurately without stopping its action, and voice-instruction recognition accuracy and working efficiency are well coordinated.
In some embodiments, the judging module 202 need not perform its operation only after the detection module 201 detects that the user has issued a voice instruction. For example, the judging module 202 may judge whether the current voice-instruction recognition accuracy is lower than the preset accuracy threshold after the robot starts a new task and store the result locally; after the detection module 201 detects that the user has issued a voice instruction, the robot voice-instruction recognition apparatus directly reads the stored result.
In some embodiments, the robot voice-instruction recognition apparatus may include neither the detection module 201 nor the judging module 202; instead, an external device triggers the adjusting module 203 to adjust the working state of the robot after the user has uttered speech and when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold.
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a robot apparatus according to an embodiment of the present invention. As shown in fig. 3, the robot apparatus 300 includes:
at least one processor 310 (one processor 310 is taken as an example in FIG. 3); and a memory 320 communicatively coupled to the at least one processor 310. The memory stores a program of instructions executable by the at least one processor, and the program is executed by the at least one processor to enable the at least one processor to perform the robot voice-instruction recognition method.
The processor 310 and the memory 320 may be connected by a bus or in another manner; connection by a bus is shown as an example in FIG. 3.
The memory 320 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the method for recognizing robot voice instructions in the embodiments of the present application. The processor 310 executes various functional applications and data processing of the robot device, i.e., implements the robot voice instruction recognition method applied to the robot device of the above-described method embodiments, by executing the nonvolatile software program, instructions, and modules stored in the memory 320.
The memory 320 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the above-described robot voice instruction recognition method, and the like. Further, the memory 320 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 320 may include memory located remotely from the processor 310, which may be connected to the robotic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 320 and, when executed by the one or more processors 310, perform the method of robot voice instruction recognition applied to a robotic device in any of the method embodiments described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above described systems, apparatuses and units may refer to the corresponding processes in the above described method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (13)

1. A method for recognizing a robot voice command, comprising:
determining the current voice-instruction recognition accuracy according to the current noise, wherein the current noise comprises noise generated while the robot works;
after detecting that a user has uttered speech, acquiring the priority of the task currently executed by the robot and determining a preset accuracy threshold according to the priority of the executed task; and when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, adjusting the working state of the robot to reduce the noise generated while the robot works, until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold;
wherein the preset accuracy threshold corresponds to the priority of the task currently executed by the robot, and the higher the priority of the executed task, the lower the corresponding accuracy threshold.
2. The method according to claim 1, wherein determining the current voice-instruction recognition accuracy according to the current noise comprises:
acquiring a current noise level; and
determining the current voice-instruction recognition accuracy from the current noise level according to a correspondence between noise levels and recognition accuracies.
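The noise-to-accuracy correspondence recited in claim 2 can be pictured as a simple lookup table. The patent does not publish concrete values, so the decibel levels, accuracy figures, and the names `NOISE_TO_ACCURACY` and `estimate_accuracy` below are illustrative assumptions only, not part of the claimed method:

```python
# Hypothetical correspondence between ambient noise level (dBA) and the
# expected voice-instruction recognition accuracy. These figures are
# invented for illustration; a real robot would calibrate its own table.
NOISE_TO_ACCURACY = [
    (40.0, 0.98),  # quiet room
    (55.0, 0.92),  # normal conversation level
    (70.0, 0.80),  # robot motors running
    (85.0, 0.60),  # loud machinery
]

def estimate_accuracy(noise_db: float) -> float:
    """Return the expected recognition accuracy for a measured noise level.

    Uses the highest table entry not exceeding the measured level, i.e. a
    simple step-function correspondence as described in claim 2.
    """
    accuracy = NOISE_TO_ACCURACY[0][1]
    for level, acc in NOISE_TO_ACCURACY:
        if noise_db >= level:
            accuracy = acc
    return accuracy
```

The estimated accuracy would then be compared against the priority-dependent threshold of claim 1.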
3. The method according to claim 1, wherein adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold comprises:
determining an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and
adjusting the actions of the robot according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
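One way to read claim 3 is that the adjustment amplitude grows with the shortfall between the current accuracy and the threshold. The proportional mapping below is a minimal sketch under that assumption; the function name `adjustment_amplitude` and the `gain` and `max_scale` parameters are hypothetical, not taken from the patent:

```python
def adjustment_amplitude(current_acc: float, threshold: float,
                         gain: float = 2.0, max_scale: float = 1.0) -> float:
    """Map the accuracy shortfall to a fraction by which to slow the robot.

    A larger gap between the preset threshold and the current accuracy
    yields a larger adjustment, capped at fully slowing the actions.
    """
    gap = threshold - current_acc
    if gap <= 0:
        return 0.0  # already accurate enough; no adjustment needed
    return min(gain * gap, max_scale)
```

For example, an accuracy of 0.6 against a threshold of 0.8 would yield an amplitude of 0.4 with the assumed gain, while an accuracy already above the threshold yields no adjustment at all.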
4. The method according to claim 3, further comprising:
when the adjusted voice-instruction recognition accuracy of the robot does not reach the preset accuracy threshold, notifying the user that the current environmental noise is high, or moving the robot closer to the user.
5. The method according to claim 4, wherein, when the adjusted voice-instruction recognition accuracy of the robot does not reach the preset accuracy threshold, notifying the user that the current environmental noise is high or moving the robot closer to the user comprises:
when the adjusted voice-instruction recognition accuracy of the robot does not reach the preset accuracy threshold and the priority of the currently executed task is lower than a preset priority, notifying the user that the current environmental noise is high, or moving the robot closer to the user.
6. The method according to claim 1, wherein adjusting the working state of the robot until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold comprises:
adjusting the actions of the robot step by step according to a preset adjustment amplitude until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
7. The method according to claim 6, further comprising:
when the robot has stopped its actions after the working-state adjustment and the voice-instruction recognition accuracy still does not reach the preset accuracy threshold, notifying the user that the current environmental noise is high, or moving the robot closer to the user.
8. The method according to any one of claims 1 to 7, wherein adjusting the working state of the robot comprises one or any combination of the following: slowing down the action speed, reducing the motor rotation speed, and turning off steering engines (servos) in non-human-voice frequency bands.
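Claims 6 to 8 together describe a step-by-step loop: try one noise-reduction action, re-measure the recognition accuracy, and stop once the threshold is reached or the actions are exhausted. The sketch below assumes an ordering of the claim-8 actions and placeholder callbacks `measure_accuracy` and `apply_step`; none of these names come from the patent:

```python
# Ordered noise-reduction actions drawn from claim 8; each is tried in turn.
STEPS = ["slow_actions", "reduce_motor_speed", "disable_noncritical_servos"]

def adjust_until_accurate(measure_accuracy, apply_step, threshold: float) -> bool:
    """Step-by-step adjustment loop sketched from claims 6 and 7.

    `measure_accuracy` re-measures recognition accuracy and `apply_step`
    performs one noise-reduction action; both are placeholders for
    robot-specific implementations. Returns True once the threshold is
    reached; returns False when every step has been applied without
    success (per claim 7, the robot should then notify the user or
    approach the user).
    """
    for step in STEPS:
        if measure_accuracy() >= threshold:
            return True
        apply_step(step)
    return measure_accuracy() >= threshold
```

A caller would wire `apply_step` to the motor and servo controls and branch on the boolean result to trigger the claim-7 notification behaviour.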
9. A robot voice-instruction recognition apparatus, comprising:
an accuracy determining module, configured to determine a current voice-instruction recognition accuracy according to current noise, wherein the current noise comprises noise generated by the robot during operation; and
an adjusting module, configured to: after a user utters a voice, acquire the priority of a task currently being executed by the robot, determine a preset accuracy threshold according to the priority of the executed task, and, when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, adjust the working state of the robot to reduce the noise generated by the robot during operation until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold;
wherein the preset accuracy threshold corresponds to the priority of the task currently being executed by the robot, and the higher the priority of the executed task, the lower the corresponding accuracy threshold.
10. The apparatus according to claim 9, wherein the accuracy determining module is specifically configured to:
acquire a current noise level; and
determine the current voice-instruction recognition accuracy from the current noise level according to a correspondence between noise levels and recognition accuracies.
11. The apparatus according to claim 9, wherein the adjusting module comprises:
an amplitude determining unit, configured to determine an adjustment amplitude for the working state of the robot according to the difference between the current voice-instruction recognition accuracy and the preset accuracy threshold; and
a first adjusting unit, configured to adjust the actions of the robot according to the determined adjustment amplitude, so that the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold.
12. The apparatus according to any one of claims 9 to 11, wherein adjusting the working state of the robot comprises one or any combination of the following: slowing down the action speed, reducing the motor rotation speed, and turning off steering engines (servos) in non-human-voice frequency bands.
13. A robotic device, comprising:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a program of instructions executable by the at least one processor, the program of instructions being executable by the at least one processor to cause the at least one processor to:
determine a current voice-instruction recognition accuracy according to current noise, wherein the current noise comprises noise generated by the robot during operation; and
after detecting a voice uttered by a user, acquire the priority of a task currently being executed by the robot, determine a preset accuracy threshold according to the priority of the executed task, and, when the current voice-instruction recognition accuracy is lower than the preset accuracy threshold, adjust the working state of the robot to reduce the noise generated by the robot during operation until the adjusted voice-instruction recognition accuracy of the robot reaches the preset accuracy threshold;
wherein the preset accuracy threshold corresponds to the priority of the task currently being executed by the robot, and the higher the priority of the executed task, the lower the corresponding accuracy threshold.
CN201680002660.0A 2016-11-16 2016-11-16 Robot voice instruction recognition method and related robot device Active CN106796790B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/106118 WO2018090252A1 (en) 2016-11-16 2016-11-16 Voice instruction recognition method for robot, and related robot device

Publications (2)

Publication Number Publication Date
CN106796790A CN106796790A (en) 2017-05-31
CN106796790B true CN106796790B (en) 2020-11-10

Family

ID=58952335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680002660.0A Active CN106796790B (en) 2016-11-16 2016-11-16 Robot voice instruction recognition method and related robot device

Country Status (2)

Country Link
CN (1) CN106796790B (en)
WO (1) WO2018090252A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107820619B (en) * 2017-09-21 2019-12-10 达闼科技(北京)有限公司 hierarchical interaction decision-making method, interaction terminal and cloud server
CN111968643A (en) * 2017-09-29 2020-11-20 赵成智 Intelligent recognition method, robot and computer readable storage medium
CN109994111B (en) * 2019-02-26 2021-11-23 维沃移动通信有限公司 Interaction method, interaction device and mobile terminal
CN110288991B (en) * 2019-06-18 2022-02-18 北京梧桐车联科技有限责任公司 Voice recognition method and device
CN114536363B (en) * 2022-02-25 2024-06-04 杭州萤石软件有限公司 Robot control method, robot system and robot
CN115171284B (en) * 2022-07-01 2023-12-26 国网汇通金财(北京)信息科技有限公司 Senior caring method and device

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1397063A (en) * 2000-11-27 2003-02-12 皇家菲利浦电子有限公司 Method for control of unit comprising acoustic output device
CN101206857A (en) * 2006-12-19 2008-06-25 国际商业机器公司 Method and system for modifying speech processing arrangement
CN101510425A (en) * 2008-02-15 2009-08-19 株式会社东芝 Voice recognition apparatus and method for performing voice recognition
US8311820B2 (en) * 2010-01-28 2012-11-13 Hewlett-Packard Development Company, L.P. Speech recognition based on noise level
CN102831894A (en) * 2012-08-09 2012-12-19 华为终端有限公司 Command processing method, command processing device and command processing system
CN103065631A (en) * 2013-01-24 2013-04-24 华为终端有限公司 Voice identification method and device
US8438023B1 (en) * 2011-09-30 2013-05-07 Google Inc. Warning a user when voice input to a device is likely to fail because of background or other noise
CN103916875A (en) * 2014-04-24 2014-07-09 山东大学 Management and planning system of multi-class control terminals based on WIFI wireless network
CN103928026A (en) * 2014-05-12 2014-07-16 安徽江淮汽车股份有限公司 Automobile voice command acquiring and processing system and method
CN103971680A (en) * 2013-01-24 2014-08-06 华为终端有限公司 Method and device for recognizing voices
CN104064185A (en) * 2013-03-18 2014-09-24 联想(北京)有限公司 Information processing method and system and electronic device
CN104078040A (en) * 2014-06-26 2014-10-01 美的集团股份有限公司 Voice recognition method and system
CN104505092A (en) * 2014-12-10 2015-04-08 广东美的制冷设备有限公司 Voice control method and system of air conditioner
CN104756185A (en) * 2012-11-05 2015-07-01 三菱电机株式会社 Speech recognition device
US9293130B2 (en) * 2008-05-02 2016-03-22 Nuance Communications, Inc. Method and system for robust pattern matching in continuous speech for spotting a keyword of interest using orthogonal matching pursuit

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3493849B2 (en) * 1995-11-20 2004-02-03 ソニー株式会社 Voice recognition device
KR101622604B1 (en) * 2009-05-19 2016-05-31 엘지전자 주식회사 Mobile terminal and method for processing process thereof
GB2526980B (en) * 2013-07-10 2017-04-12 Cirrus Logic Int Semiconductor Ltd Sensor input recognition


Also Published As

Publication number Publication date
CN106796790A (en) 2017-05-31
WO2018090252A1 (en) 2018-05-24

Similar Documents

Publication Publication Date Title
CN106796790B (en) Robot voice instruction recognition method and related robot device
CN105116994A (en) Intelligent robot tracking method and tracking device based on artificial intelligence
US20190071816A1 (en) Device and method for controlling automatic opening and closing of upper cover of washing machine
EP3130696A1 (en) Washing machine and anti-pinching control method for electric door thereof
EP3133457A1 (en) Obstacle avoidance walking method of self-moving robot
CN109108974B (en) Robot avoidance method and device, background server and storage medium
US9477217B2 (en) Using visual cues to improve appliance audio recognition
CN105116920A (en) Intelligent robot tracking method and apparatus based on artificial intelligence and intelligent robot
CN109506347B (en) Control method and system for synchronous operation of wind sweeping motor and air conditioner
CN107166671B (en) Air conditioner and constant air volume control method, control device and control system for indoor fan of air conditioner
CN107581976B (en) Cleaning method and device and cleaning robot
BR112016003131B1 (en) METHOD AND APPARATUS FOR SENDING AN ALERT MESSAGE
CN108375911B (en) Equipment control method and device, storage medium and equipment
CN110657561B (en) Air conditioner and voice instruction recognition method, control device and readable storage medium thereof
CN109347708B (en) Voice recognition method and device, household appliance, cloud server and medium
CN112731821B (en) Equipment movement method and electronic equipment
KR20200046262A (en) Speech recognition processing method for noise generating working device and system thereof
US20160179101A1 (en) Control method and system for adjusting relative position of mobile household device with respect to human
CN111493750A (en) Control method and device of sweeping robot and electronic equipment
WO2023138096A1 (en) Control method and apparatus for air conditioner, and air conditioner and storage medium
CN108038947B (en) Intelligent door lock system based on Bluetooth
WO2017041594A1 (en) Robot operating state switching method and system
CN104239130A (en) Control method of operating instruction response by human-computer interaction interface and terminal
CN109714233B (en) Home control method and corresponding routing equipment
CN111028831A (en) Voice awakening method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210125

Address after: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: CLOUDMINDS (SHENZHEN) HOLDINGS Co.,Ltd.

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.