CN113359538A - Voice control robot - Google Patents


Info

Publication number
CN113359538A
CN113359538A (application CN202010146715.3A)
Authority
CN
China
Prior art keywords
voice
module
scanning
unit
signal
Prior art date
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN202010146715.3A
Other languages
Chinese (zh)
Inventor
林家仁
许世昌
赖俊吉
Current Assignee
Teco Electric and Machinery Co Ltd
Original Assignee
Teco Electric and Machinery Co Ltd
Priority date
Filing date
Publication date
Application filed by Teco Electric and Machinery Co Ltd filed Critical Teco Electric and Machinery Co Ltd
Priority to CN202010146715.3A
Publication of CN113359538A

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems, electric
    • G05B19/04: Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042: Programme control using digital processors
    • G05B19/0423: Input/output
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/25: Pc structure of the system
    • G05B2219/25257: Microcontroller

Abstract

The invention provides a voice-controlled robot comprising a scanning module, a judging module, an analysis module and a control module. The scanning module has a scanning range and scans out scanning information corresponding to that range. The judging module receives and judges the scanning information, and generates an analysis start signal when it determines that a user facing the voice-controlled robot is present within the scanning range. After receiving the analysis start signal, the analysis module receives and analyzes an audio signal, and generates a voice start signal when it determines that the audio signal is the wake-up word signal of a wake-up word spoken by the user. On receiving the voice start signal, the control module controls the voice-controlled robot to start a voice mode.

Description

Voice control robot
Technical Field
The invention relates to a robot, in particular to a voice control robot.
Background
With the advancement of science and technology, robots have been applied in more and more industries and fields, such as restaurants, banks, factories, warehouses and the like. Most robots in the prior art still rely on instructions entered manually by users, while some can be controlled by voice commands.
In the prior art, a voice-controlled robot must continuously analyze audio signals from its environment to determine whether they contain a voice command spoken by a user. This forces the robot to keep spending resources on audio that may be entirely irrelevant, such as the chatter of people nearby, and thereby wastes the electric energy stored in the robot. The voice-controlled robots of the related art therefore leave room for improvement.
Disclosure of Invention
In view of the prior-art problems that a voice-controlled robot must analyze environmental audio signals, occupying computing resources and wasting stored electric energy, a primary object of the present invention is to provide a voice-controlled robot that solves at least one of these problems.
The present invention provides a voice-controlled robot, which comprises a scanning module, a judging module, an analyzing module and a control module.
The scanning module has a scanning range and is used for scanning out scanning information corresponding to the scanning range. The judging module is electrically connected with the scanning module, and is used for receiving and judging the scanning information and for generating an analysis start signal when it judges that a user facing the voice-controlled robot exists within the scanning range. The analysis module is electrically connected with the judging module, and is used for receiving and analyzing an audio signal after receiving the analysis start signal, and for generating a voice start signal when it judges that the audio signal is the wake-up word signal of a wake-up word spoken by the user. The control module is electrically connected with the analysis module, and is used for controlling the voice-controlled robot to start a voice mode upon receiving the voice start signal. In the voice mode, the voice-controlled robot executes the instruction work corresponding to a voice instruction spoken by the user.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the judging module in the voice-controlled robot comprises a shank shape determining unit for determining that a user facing the voice-controlled robot exists within the scanning range when two semi-circular-like arcs that are adjacent to each other and have the same shape and the same intensity value are found in the scanning information.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the judging module in the voice-controlled robot further comprises a distance estimating unit electrically connected to the shank shape determining unit for calculating a current distance between the user and the voice-controlled robot by using the two semi-circular-like arcs.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the judging module in the voice control robot further comprises a signal judging unit, the signal judging unit is electrically connected to the distance estimating unit for receiving the current distance and generating an analysis starting signal when judging that the current distance is smaller than a set distance.
Based on the above-mentioned necessary technical means, an accessory technical means derived from the present invention is that the analysis module in the voice-controlled robot includes a recording unit, and the recording unit is used for recording an audio signal of a preset time when receiving the analysis start signal.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the analysis module in the voice-controlled robot further comprises a frequency analysis unit, wherein the frequency analysis unit is electrically connected to the recording unit and is used for receiving the audio signal and comparing it with the wake-up word signal, determining that the user has spoken the wake-up word when the audio signal matches the wake-up word signal.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is to make the analysis module in the voice control robot further comprise a triggering unit, the triggering unit is electrically connected with the recording unit and the frequency analysis unit, and when it is determined that a voice frequency in the audio signal reaches a triggering frequency of the wakeup word signal, the audio signal is compared with the wakeup word signal by the frequency analysis unit.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the analysis module in the voice-controlled robot further comprises an interval audio setting unit, wherein the interval audio setting unit is electrically connected with the recording unit and the frequency analysis unit and is used for inserting a plurality of interval audios with frequencies substantially equal to zero into the audio signal; the frequency analysis unit then compares the resulting modified audio signal with the wake-up word signal.
Based on the above-mentioned necessary technical means, an accessory technical means derived by the present invention is that the scanning module in the voice-controlled robot comprises a laser scanning unit, and the laser scanning unit is used for scanning the scanning information corresponding to the scanning range.
In view of the above, the voice-controlled robot provided by the present invention utilizes the determining module to determine whether there is a user facing the voice-controlled robot, and then utilizes the analyzing module to further analyze whether the user utters the wake-up word, and after the two modules confirm, the voice mode is turned on by the controlling module. In addition, the judging module can further judge whether the user is within the set distance, and the analyzing module can further judge whether the audio signal reaches the trigger frequency, and then the analysis of the awakening word signal and the audio signal is carried out, so that the user is ensured to start the voice mode after facing the voice control robot and speaking the awakening word.
Drawings
FIG. 1 is a block diagram of a voice-controlled robot according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart showing the analysis of the analysis module of the voice-controlled robot according to the preferred embodiment of the present invention;
FIG. 3 is a schematic diagram showing a scan of a voice-controlled robot according to the present invention;
FIG. 4 is a diagram illustrating the scan information of FIG. 3;
FIG. 5 is a schematic diagram showing another scan of a voice-controlled robot provided by the present invention;
FIG. 6 is a diagram showing the scan information of FIG. 5;
FIG. 7 is a schematic diagram showing the analysis module of the voice-controlled robot provided by the present invention analyzing an audio signal;
FIG. 8 is another schematic diagram showing the analysis module of the voice-controlled robot provided by the present invention analyzing an audio signal;
FIG. 9 is a schematic diagram illustrating a voice-controlled robot according to a preferred embodiment of the present invention executing a command job; and
fig. 10 is a schematic diagram illustrating the voice-controlled robot according to the preferred embodiment of the present invention executing another instruction task.
Description of the reference numerals
1: voice control robot
11: scanning module
111 laser scanning unit
12: judging module
121 calf shape determination unit
122 distance estimation unit
123 signal judging unit
13 analysis module
131 recording unit
132 frequency analysis Unit
133 trigger unit
134 interval audio frequency setting unit
Control module 14
AS scanning range
B1, B2 Block
D current distance
IS scanning information
L1, L2 left shank
O1, O2 speech commands
P is the center point
R1, R2 right shank
S1, S2, S3 and S4 similar to semi-circular arc
SP1, SP2 Interval Audio
SS audio signal
SS1, SS2, SS3, SS4 audio blocks
TF trigger frequency
U1, U2 users
Detailed Description
Embodiments of the present invention are described in more detail below with reference to the schematic drawings. Advantages and features of the present invention will become apparent from the following description and claims. It should be noted that the drawings are in a greatly simplified form and not to precise scale, and are provided merely to facilitate a convenient and clear description of the embodiments of the present invention.
Referring to fig. 1 and fig. 2, fig. 1 is a block diagram of a voice-controlled robot according to a preferred embodiment of the present invention; and, fig. 2 is an analysis flowchart showing an analysis module of the voice-controlled robot according to the preferred embodiment of the present invention. As shown in the figure, a voice-controlled robot 1 includes a scanning module 11, a determining module 12, an analyzing module 13 and a control module 14.
The scanning module 11 has a scanning range and is used for scanning out scanning information corresponding to the scanning range. In the present embodiment, the scanning module 11 includes a laser scanning unit 111, which is used for scanning out the scanning information.
The determining module 12 is electrically connected to the scanning module 11, and is configured to receive and determine the scanning information, and generate an analysis start signal when it is determined that a user facing the voice-controlled robot 1 exists within the scanning range.
In the present embodiment, the determining module 12 includes a shank shape determining unit 121, a distance estimating unit 122 and a signal determining unit 123.
The analysis module 13 is electrically connected to the determination module 12, and is configured to receive and analyze an audio signal after receiving the analysis start signal, and generate a voice start signal when determining that the audio signal is a wake-up word signal of a wake-up word spoken by a user.
In the present embodiment, the analysis module 13 includes a recording unit 131, a frequency analysis unit 132, a triggering unit 133 and an interval audio setting unit 134.
The analysis module 13 includes the following steps S101 to S104 after receiving the analysis start signal and before generating the voice start signal.
Step S101: the recording unit 131 records an audio signal.
The recording unit 131 records an audio signal for a predetermined time, which may be, for example, 4 seconds, 5 seconds or 10 seconds. Generally, the source of the audio signal is a human voice within the scanning range, which may be the wake-up word or speech unrelated to it, such as the sound of chatting.
Step S102: the frequency analysis unit 132 receives and analyzes the audio signal.
Step S103: the frequency analysis unit 132 compares whether the audio signal matches the wakeup word signal.
The frequency analysis unit 132 compares the audio signal with the wake-up word signal. In practice, the frequency analysis unit 132 converts the signal from the time domain into the frequency domain using a Fourier transform and performs the comparison on the frequencies. When the frequency analysis unit 132 finds that the frequency content of the audio signal matches the wake-up word signal, it determines that the audio signal is the wake-up word spoken by the user.
Step S104: the analysis module 13 generates a voice start signal.
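The frequency-domain comparison of steps S102 and S103 can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the function name, the normalized-correlation similarity measure and the 0.8 threshold are assumptions, since the description only specifies a Fourier transform followed by a frequency comparison.

```python
import numpy as np

def matches_wake_word(audio, template, threshold=0.8):
    """Compare a recorded clip against a stored wake-word template in the
    frequency domain: take FFT magnitude spectra of both signals and score
    them with a normalized correlation (threshold is an assumed tuning knob)."""
    n = max(len(audio), len(template))
    # Magnitude spectra, zero-padded to a common length.
    spec_a = np.abs(np.fft.rfft(audio, n))
    spec_t = np.abs(np.fft.rfft(template, n))
    denom = np.linalg.norm(spec_a) * np.linalg.norm(spec_t)
    if denom == 0:
        return False
    # Normalized cross-correlation of the two spectra (1.0 = identical).
    similarity = float(np.dot(spec_a, spec_t) / denom)
    return similarity >= threshold
```

A pure tone compared against itself scores 1.0, while tones at distinct frequencies score near zero, which mirrors the "frequency of the audio signal is the same as the wake-up word signal" criterion.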
The control module 14 is electrically connected to the analysis module 13, and is configured to control the voice-controlled robot 1 to start a voice mode when receiving the voice start signal. Wherein, the voice control robot 1 executes a corresponding instruction work according to a voice instruction output by a user in the voice mode.
Next, please refer to fig. 1 to 8, wherein fig. 3 is a schematic scanning diagram of the voice-controlled robot provided by the present invention; FIG. 4 is a diagram illustrating the scan information of FIG. 3; FIG. 5 is a schematic diagram showing another scan of a voice-controlled robot provided by the present invention; FIG. 6 is a diagram showing the scan information of FIG. 5; FIG. 7 is a schematic diagram showing the analysis module of the voice-controlled robot provided by the present invention analyzing an audio signal; fig. 8 is another schematic diagram showing the analysis module of the voice-controlled robot provided by the present invention analyzing an audio signal.
The scanning module 11 has a scanning range AS and scans out the scanning information IS corresponding to the scanning range AS. The determination module 12 determines whether there is a user facing the voice-controlled robot 1 in the scanning range AS.
The shank shape determining unit 121 determines whether two similar semi-circular arcs adjacent to each other, having the same shape and the same intensity value exist in the scanning information IS, so AS to determine whether the user in the scanning range AS faces the voice-controlled robot 1.
In fig. 3, a user U1 within the scanning range AS does not face the voice-controlled robot 1, but stands turned to one side. The scanning information IS scanned by the scanning module 11 is therefore as shown in fig. 4. Although the scanning information IS of fig. 4 includes two semi-circular-like arcs S1 and S2, these arcs are neither adjacent to each other nor of the same shape. The shank shape determining unit 121 therefore determines that the user U1 within the scanning range AS is not facing the voice-controlled robot 1, and the judging module 12 does not generate an analysis start signal.
More specifically, the scanning module 11 is disposed at a height approximately level with a human calf, so that the semi-circular-like arc S1 of the scanning information IS corresponds to the right calf R1 of the user U1, and the semi-circular-like arc S2 corresponds to the left calf L1 of the user U1. Since the user U1 is turned to the right and the left calf L1 is closer to the voice-controlled robot 1, the semi-circular-like arc S2 in the scanning information IS is closer to the voice-controlled robot 1 than the semi-circular-like arc S1. Furthermore, because the user U1's left foot rests on its toes, the scanning module 11 actually scans a lower portion of the left calf L1, closer to the left ankle. The semi-circular-like arc S2 in the scanning information IS is therefore smaller in shape than the semi-circular-like arc S1.
In addition, since the distance between the right lower leg R1 and the scan module 11 and the distance between the left lower leg L1 and the scan module 11 are not the same, and the scanned positions are also not the same, the intensity values of the semi-circular-like arc S1 and the semi-circular-like arc S2 are not the same in the scan information IS.
In fig. 5, a user U2 faces the voice-controlled robot 1 within the scanning range AS, and the scanning information IS scanned by the scanning module 11 is as shown in fig. 6. The scanning information IS of fig. 6 also includes two semi-circular-like arcs S3 and S4, wherein the arc S3 corresponds to the right calf R2 of the user U2 and the arc S4 corresponds to the left calf L2 of the user U2.
Since the shank shape determining unit 121 determines that the semi-circular-like arcs S3 and S4 are adjacent to each other, have the same shape and have the same intensity value, it determines that the user U2 within the scanning range AS is facing the voice-controlled robot 1. The judging module 12 then generates an analysis start signal. This first-stage judgment by the judging module 12 avoids having to analyze environmental audio signals at all times, as in the prior art, and so spares the computing resources of the voice-controlled robot 1.
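The facing-user criterion (two adjacent arcs of the same shape and the same intensity) can be sketched as below. This is a reconstruction under stated assumptions, not the patented implementation: the `Arc` type, the adjacency threshold and both tolerances are illustrative values, since the description does not quantify "adjacent" or "same".

```python
from dataclasses import dataclass
import math

@dataclass
class Arc:
    x: float          # arc centre in the robot's scan frame (metres)
    y: float
    radius: float     # recovered arc radius, used here as a proxy for "shape"
    intensity: float  # mean laser return intensity

def user_is_facing(arcs, max_gap=0.5, shape_tol=0.02, intensity_tol=0.1):
    """Return True when some pair of arcs is adjacent (gap below max_gap),
    has the same shape (radii within shape_tol) and the same intensity
    (within intensity_tol); all three thresholds are assumed tunings."""
    for i, a in enumerate(arcs):
        for b in arcs[i + 1:]:
            gap = math.hypot(a.x - b.x, a.y - b.y)
            same_shape = abs(a.radius - b.radius) <= shape_tol
            same_intensity = abs(a.intensity - b.intensity) <= intensity_tol
            if gap <= max_gap and same_shape and same_intensity:
                return True
    return False
```

Two close, identical arcs (the fig. 6 situation) pass; arcs that are far apart or dissimilar (the fig. 4 situation) do not.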
Preferably, the distance estimating unit 122 calculates a current distance D between the user U2 and the voice-controlled robot 1 by using the semi-circular-like arcs S3 and S4. For example, the distance estimating unit 122 may calculate a center point P between a block B1 of the arc S3 and a block B2 of the arc S4, and use the center point P to represent the position of the user U2, but is not limited thereto. The distance estimating unit 122 may also derive the centers of the arcs S3 and S4 from their curvatures and define the midpoint of the line connecting the two centers as the position of the user U2.
The signal judging unit 123 is electrically connected to the distance estimating unit 122, and the judging module 12 generates the analysis start signal only when the current distance D is judged to be smaller than a set distance. In practice, the set distance may be 2 meters, 1 meter, 50 centimeters and so on, to prevent a user who faces the voice-controlled robot 1 but is actually far away from it from triggering the judging module 12 to generate the analysis start signal. The speaking distance between people is generally within 2 meters; likewise, a user controlling the robot by voice should speak commands within about 2 meters. The judging module 12 can therefore determine not only whether the user is facing the voice-controlled robot 1, but also whether the user is actually close to it.
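The center-point estimate and the set-distance gate above can be sketched together. A minimal sketch, assuming the robot sits at the origin of the scan frame and that each arc is given as its list of scan points; the function names and the 2-meter default are illustrative, the 2 m figure being the example named in the description.

```python
import math

def current_distance(arc_a_points, arc_b_points):
    """Estimate the user's position as the midpoint P between the centroids
    of the two calf arcs' scan points, and return its distance from the
    robot, which is assumed to sit at the origin of the scan frame."""
    def centroid(points):
        xs, ys = zip(*points)
        return sum(xs) / len(xs), sum(ys) / len(ys)
    (ax, ay), (bx, by) = centroid(arc_a_points), centroid(arc_b_points)
    px, py = (ax + bx) / 2, (ay + by) / 2   # center point P
    return math.hypot(px, py)

def should_start_analysis(distance, set_distance=2.0):
    """Signal judging unit gate: emit the analysis start signal only when
    the current distance is smaller than the set distance."""
    return distance < set_distance
```

For two arcs centred roughly 1 m ahead of the robot, the estimated distance falls inside the 2 m gate and analysis would start; at 3 m it would not.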
The analysis module 13 further receives and analyzes an audio signal SS after receiving the analysis start signal, i.e. steps S101 to S104.
Preferably, the triggering unit 133 is electrically connected to the recording unit 131 and the frequency analyzing unit 132, and compares the audio signal with the wakeup word signal through the frequency analyzing unit 132 when a voice frequency in the audio signal SS reaches a triggering frequency TF in the wakeup word signal.
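One plausible reading of the trigger unit is a cheap gate that checks whether the dominant frequency of the recorded signal reaches the trigger frequency TF before the full wake-word comparison runs. The sketch below follows that reading as an assumption; the patent does not specify how the trigger frequency is evaluated, and the 300 Hz default is illustrative.

```python
import numpy as np

def reaches_trigger_frequency(audio, sample_rate=16000, trigger_hz=300.0):
    """Trigger-unit sketch: locate the dominant frequency of the recorded
    signal and report whether it reaches the trigger frequency TF; only
    then would the frequency analysis unit run the full comparison."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    dominant = freqs[int(np.argmax(spectrum))]
    return bool(dominant >= trigger_hz)
```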
Preferably, the interval audio setting unit 134 is electrically connected to the recording unit 131 and the frequency analyzing unit 132 for inserting a plurality of interval audios with frequencies substantially equal to zero into the audio signal SS.
For example, the audio signal SS includes a plurality of audio blocks SS1, SS2, SS3 and SS4 corresponding to the words spoken by the user U2. Because each person speaks at a different rate, the audio blocks SS1, SS2, SS3 and SS4 may not be as clearly separated as shown in fig. 7. To prevent the frequency analysis unit 132 from misjudging because the user U2 speaks quickly, the interval audio setting unit 134 sets an interval audio before and after each of the audio blocks SS1, SS2, SS3 and SS4. As shown in fig. 8, the interval audio setting unit 134 sets an interval audio SP1 and an interval audio SP2 before and after the audio block SS2; in practice, each interval audio lasts about 0.2 seconds. Once the plurality of interval audios are inserted into the audio signal SS, a modified audio signal is formed, and the frequency analysis unit 132 analyzes this modified audio signal.
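The interval-audio insertion can be sketched as follows, assuming the audio blocks have already been segmented (segmentation itself is not detailed in the description). The 0.2-second gap follows the figure discussion above; the function name and sample rate are illustrative.

```python
import numpy as np

def insert_interval_audio(blocks, sample_rate=16000, gap_seconds=0.2):
    """Build the modified audio signal by padding every audio block with a
    near-zero 'interval audio' segment on both sides, so that words spoken
    quickly stay separable for the later frequency analysis."""
    gap = np.zeros(int(sample_rate * gap_seconds))
    pieces = []
    for block in blocks:
        pieces.extend([gap, np.asarray(block, dtype=float), gap])
    return np.concatenate(pieces)
```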
When the analysis module 13 determines that the audio signal matches the wake-up word signal, indicating that the user U2 has spoken the wake-up word, it generates a voice start signal. On receiving the voice start signal, the control module 14 controls the voice-controlled robot 1 to start the voice mode.
Referring to fig. 9 and 10, fig. 9 is a schematic diagram illustrating a voice-controlled robot according to a preferred embodiment of the present invention executing a command operation; fig. 10 is a schematic diagram illustrating the voice-controlled robot according to the preferred embodiment of the present invention executing another instruction. As shown, the voice-controlled robot 1 enters a voice mode.
In fig. 9, the user U2 utters a voice command O1, wherein the voice command O1 is a meal delivery command. The voice-controlled robot 1 analyzes the voice command O1, and when it is confirmed that the voice command O1 is a meal delivery command, it executes the corresponding command operation, i.e., moves to deliver the meal.
In fig. 10, the user U2 speaks a voice command O2, wherein the voice command O2 is a dialog command. The voice-controlled robot 1 analyzes the voice command O2, and when it confirms that the voice command O2 is a dialog command, it executes the corresponding instruction work, i.e., responds to the words spoken by the user U2.
In summary, the voice-controlled robot provided by the present invention utilizes the determining module to determine whether there is a user facing the voice-controlled robot, and then utilizes the analyzing module to further analyze whether the user utters the wake-up word, and after the two modules confirm, the voice mode is turned on by the controlling module. In addition, the judging module can further judge whether the user is within the set distance, and the analyzing module can further judge whether the audio signal reaches the trigger frequency, and then the analysis of the awakening word signal and the audio signal is carried out, so that the user is ensured to start the voice mode after facing the voice control robot and speaking the awakening word.
The foregoing detailed description of the preferred embodiments is intended to more clearly illustrate the features and spirit of the present invention, and not to limit the scope of the invention by the preferred embodiments disclosed above. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.

Claims (9)

1. A voice-controlled robot, comprising:
the scanning module is provided with a scanning range and used for scanning the scanning information corresponding to the scanning range;
the judging module is electrically connected with the scanning module and used for receiving and judging the scanning information and generating an analysis starting signal when judging that a user facing the voice control robot exists in the scanning range;
the analysis module is electrically connected with the judgment module and used for receiving and analyzing the audio signal after receiving the analysis starting signal and generating a voice starting signal when judging that the audio signal is a wake-up word signal of the wake-up word spoken by the user; and
the control module is electrically connected with the analysis module and used for controlling the voice control robot to start a voice mode when receiving the voice starting signal;
and the voice control robot executes instruction work corresponding to the voice instruction according to the voice instruction spoken by the user in the voice mode.
2. The voice-controlled robot according to claim 1, wherein the determining module comprises a shank shape determining unit configured to determine that the user facing the voice-controlled robot exists within the scanning range when it is determined that two semi-circular arcs, which are adjacent to each other, have the same shape and the same intensity value, exist in the scanning information.
3. The voice-controlled robot of claim 2, wherein the determining module further comprises a distance estimating unit electrically connected to the shank shape determining unit for calculating a current distance between the user and the voice-controlled robot by using the two semi-circular arcs.
4. The voice-controlled robot according to claim 3, wherein the determining module further comprises a signal determining unit electrically connected to the distance estimating unit for receiving the current distance and generating the analysis start signal when determining that the current distance is smaller than a predetermined distance.
5. The voice-controlled robot according to claim 1, wherein the analysis module includes a recording unit configured to record the audio signal for a preset time upon receiving the analysis start signal.
6. The voice-controlled robot according to claim 5, wherein the analysis module further comprises a frequency analysis unit electrically connected to the recording unit for receiving the audio signal and comparing it with the wake-up word signal, and for determining that the user has spoken the wake-up word when the audio signal matches the wake-up word signal.
7. The voice-controlled robot according to claim 6, wherein the analysis module further comprises a triggering unit, the triggering unit is electrically connected to the recording unit and the frequency analysis unit, and compares the audio signal with the wakeup word signal through the frequency analysis unit when the voice frequency in the audio signal is determined to reach the triggering frequency of the wakeup word signal.
8. The voice-controlled robot according to claim 6, wherein the analysis module further comprises an interval audio setting unit electrically connected to the recording unit and the frequency analysis unit, for inserting a plurality of interval audios with substantially zero frequency into the audio signal, and comparing the modified audio signal with the wake-up word signal by the frequency analysis unit.
9. The voice-controlled robot according to claim 1, wherein the scanning module comprises a laser scanning unit, and the laser scanning unit is configured to scan out the scanning information corresponding to the scanning range.
CN202010146715.3A (filed 2020-03-05): Voice control robot, published as CN113359538A, status Pending

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010146715.3A CN113359538A (en) 2020-03-05 2020-03-05 Voice control robot

Publications (1)

Publication Number Publication Date
CN113359538A 2021-09-07

Family

ID=77523619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010146715.3A Pending CN113359538A (en) 2020-03-05 2020-03-05 Voice control robot

Country Status (1)

Country Link
CN (1) CN113359538A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200625157A (en) * 2004-12-29 2006-07-16 Delta Electronics Inc Interactive entertainment center
CN101154385A (en) * 2006-09-28 2008-04-02 北京远大超人机器人科技有限公司 Control method for robot voice motion and its control system
CN104541306A (en) * 2013-08-02 2015-04-22 奥克兰单一服务有限公司 System for neurobehavioural animation
EP2933065A1 (en) * 2014-04-17 2015-10-21 Aldebaran Robotics Humanoid robot with an autonomous life capability
CN105468145A (en) * 2015-11-18 2016-04-06 北京航空航天大学 Robot man-machine interaction method and device based on gesture and voice recognition
US20180173494A1 (en) * 2016-12-15 2018-06-21 Samsung Electronics Co., Ltd. Speech recognition method and apparatus
WO2018205083A1 (en) * 2017-05-08 2018-11-15 深圳前海达闼云端智能科技有限公司 Robot wakeup method and device, and robot
US20190057247A1 (en) * 2016-02-23 2019-02-21 Yutou Technology (Hangzhou) Co., Ltd. Method for awakening intelligent robot, and intelligent robot
CN110154056A (en) * 2019-06-17 2019-08-23 常州摩本智能科技有限公司 Service robot and its man-machine interaction method
CN110853619A (en) * 2018-08-21 2020-02-28 上海博泰悦臻网络技术服务有限公司 Man-machine interaction method, control device, controlled device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination