CN212907066U - System for optimizing voice control

Info

Publication number
CN212907066U
Authority
CN
China
Prior art keywords
voice
module
information
control
unit
Prior art date
2020-08-28
Legal status
Active
Application number
CN202021841397.4U
Other languages
Chinese (zh)
Inventor
汤智文
刘胜利
唐韧
叶鑫
Current Assignee
Guangdong A Ok Technology Grand Development Co Ltd
Original Assignee
Guangdong A Ok Technology Grand Development Co Ltd
Priority date
2020-08-28
Filing date
2020-08-28
Publication date
2021-04-06
Application filed by Guangdong A Ok Technology Grand Development Co Ltd
Priority to CN202021841397.4U
Application granted
Publication of CN212907066U

Abstract

The utility model discloses a system for optimizing voice control, which comprises a voice recognition module, a voice control module and a voice editing module, the voice control module and the voice editing module each being connected to the voice recognition module. The voice recognition module receives voice information, performs command recognition on it according to a recognition mode, and converts voice information recognized as a command into control information. The voice control module receives the control information and executes an action according to it, and the voice editing module is used to edit the recognition mode of the voice recognition module. By providing the voice editing module, the recognition mode of the voice recognition module can be edited and voice control optimized, so that the voice information an operator uses daily can be custom-defined as control information, greatly improving the convenience and operating experience of voice control.

Description

System for optimizing voice control
Technical Field
The utility model relates to the technical field of automatic curtain opening and closing control, and in particular to a system for optimizing voice control.
Background
Electric curtains are widely used in all kinds of buildings, and with the development of science and technology, voice-controlled electric curtains have appeared. In the prior art, voice control of an electric curtain requires a specific language and intonation. However, people in different regions use different languages and accents in daily life, so the spoken language and intonation often have to be adjusted several times before they match what the electric curtain expects. This makes voice control inconvenient and degrades the operator's experience of voice control.
SUMMARY OF THE UTILITY MODEL
To address the deficiencies of the prior art, the utility model provides a system for optimizing voice control.
A system for optimizing voice control comprising:
the voice recognition module is used for receiving the voice information, performing command recognition on the voice information according to the recognition mode, and converting the voice information recognized as the command into control information;
the voice control module is connected with the voice recognition module; the voice control module receives the control information and executes actions according to the control information; and
the voice editing module is connected with the voice recognition module; the voice editing module is used for editing the recognition mode of the voice recognition module.
According to an embodiment of the present invention, the system further comprises a terminal module; the terminal module is connected with the voice editing module; the terminal module is used for sending editing information to the voice editing module, and the voice editing module edits the recognition mode of the voice recognition module according to the editing information.
According to an embodiment of the present invention, the voice recognition module includes a voice recognition unit and a voice processing unit; the voice recognition unit is connected with the voice processing unit; the voice processing unit stores the recognition mode; the voice recognition unit receives the voice information and transmits it to the voice processing unit; the voice processing unit performs command recognition on the voice information according to the recognition mode and converts the voice information recognized as a command into control information.
According to an embodiment of the present invention, the voice control module includes a voice control unit and an action execution unit; the voice control unit is connected with the action execution unit; the voice control unit receives the control information, forms a control instruction and transmits the control instruction to the action execution unit; the action execution unit executes the action according to the control instruction.
According to an embodiment of the present invention, the voice editing module includes a voice editing unit and a first wireless communication unit; the voice editing unit is connected with the first wireless communication unit; the terminal module comprises an input unit and a second wireless communication unit; the input unit is connected with the second wireless communication unit; the first wireless communication unit is wirelessly connected with the second wireless communication unit; the editing information input by the input unit is wirelessly transmitted to the voice editing unit through the cooperation of the second wireless communication unit and the first wireless communication unit; the voice editing unit changes the recognition mode of the voice recognition module according to the editing information.
According to an embodiment of the utility model, the recognition mode includes a language type.
According to an embodiment of the present invention, the recognition mode further comprises a speech intonation (accent).
Compared with the prior art, providing the voice editing module allows the recognition mode of the voice recognition module to be edited and voice control to be optimized, so that the voice information an operator uses daily can be custom-defined as control information, greatly improving the convenience and operating experience of voice control.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic structural diagram of a system for optimizing speech control according to a first embodiment;
FIG. 2 is a flowchart of a method for optimizing voice control according to a second embodiment.
Detailed Description
In the following description, numerous implementation details are set forth in order to provide a more thorough understanding of the present invention. It should be understood, however, that these implementation details should not be used to limit the invention. That is, in some embodiments of the invention, details of these implementations are not necessary. In addition, some conventional structures and components are shown in simplified schematic form in the drawings.
It should be noted that all directional indicators in the embodiments of the present invention (such as upper, lower, left, right, front and rear) are only used to explain the relative positional relationship, movement and so on of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
In addition, descriptions involving "first", "second", etc. in the present invention are for descriptive purposes only; they do not refer to a particular order or sequence and are not intended to limit the present invention, but merely distinguish components or operations described with the same technical terms, and are not to be construed as indicating or implying relative importance or the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but only where a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and is not within the protection scope of the present invention.
For a further understanding of the contents, features and effects of the present invention, the following embodiments are described in conjunction with the accompanying drawings:
example one
Referring to fig. 1, fig. 1 is a schematic structural diagram of a system for optimizing voice control according to a first embodiment. The system for optimizing voice control in this embodiment includes a voice recognition module 1, a voice control module 2, and a voice editing module 3. The voice recognition module 1 is configured to receive voice information, perform command recognition on the voice information according to a recognition mode, and convert the voice information recognized as a command into control information. The voice control module 2 is connected with the voice recognition module 1, and the voice control module 2 receives the control information and executes the action according to the control information. The voice editing module 3 is connected with the voice recognition module 1, and the voice editing module 3 is used for editing the recognition mode of the voice recognition module 1.
By providing the voice editing module 3, the recognition mode of the voice recognition module 1 can be edited and voice control optimized, so that the voice information an operator uses daily can be custom-defined as control information, greatly improving the convenience and operating experience of voice control.
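The relationship between the three modules can be pictured with a short sketch (illustrative only and not part of the disclosed embodiment; the Python class and method names are assumptions made for clarity):

```python
class VoiceRecognitionModule:
    """Module 1: holds the recognition mode and converts recognized commands into control information."""
    def __init__(self, recognition_mode):
        self.recognition_mode = recognition_mode  # editable mapping: command phrase -> control information

    def recognize(self, voice_info):
        # Only voice information found in the recognition mode is treated as a command.
        return self.recognition_mode.get(voice_info)


class VoiceControlModule:
    """Module 2: receives control information and executes the corresponding action."""
    def execute(self, control_info):
        if control_info is not None:
            print(f"executing action: {control_info}")


class VoiceEditingModule:
    """Module 3: edits the recognition mode stored in the recognition module."""
    def __init__(self, recognition_module):
        self.recognition_module = recognition_module

    def edit(self, new_mode):
        self.recognition_module.recognition_mode = dict(new_mode)


# Wiring that mirrors fig. 1: modules 2 and 3 are each connected to module 1.
recognizer = VoiceRecognitionModule({"open the curtain": "CURTAIN_OPEN"})
controller = VoiceControlModule()
editor = VoiceEditingModule(recognizer)
controller.execute(recognizer.recognize("open the curtain"))  # -> executing action: CURTAIN_OPEN
```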
Referring back to fig. 1, the system for optimizing voice control in this embodiment further includes a terminal module 4. The terminal module 4 is connected with the voice editing module 3. The terminal module 4 is used to send editing information to the voice editing module 3, and the voice editing module 3 edits the recognition mode of the voice recognition module 1 according to the editing information. The terminal module 4 makes it convenient for the operator to input editing information, further increasing the convenience of optimizing voice control.
Referring back to fig. 1, the speech recognition module 1 includes a speech recognition unit 11 and a speech processing unit 12. The speech recognition unit 11 is connected to the speech processing unit 12. The speech processing unit 12 stores the recognition mode. The speech recognition unit 11 receives voice information and passes it to the speech processing unit 12. The speech processing unit 12 performs command recognition on the voice information according to the recognition mode and converts the voice information recognized as a command into control information.
Specifically, the voice recognition unit 11 may be an existing voice recognizer or voice pickup circuit, such as a microphone, that captures speech produced by a person; the voice information in this embodiment is such speech. The voice processing unit 12 may be an MCU chip with storage and voice processing functions. After the voice recognition unit 11 passes the voice information to the voice processing unit 12, the voice processing unit 12 performs command recognition on it according to the recognition mode, i.e. analyzes and judges the voice information and converts it into control information when it is judged to be a command.
It can be understood that subsequent operations are only warranted when the voice uttered by the operator is a valid instruction; if the utterance is accidental or irrelevant, subsequent operations would be meaningless. This is the purpose of the command recognition performed by the voice processing unit 12 according to the recognition mode. The recognition mode here consists of command voices whose language type and intonation have been set by the operator through the voice editing module 3, so they are the language and intonation the operator is familiar with and uses daily. The language type can be a national language such as Mandarin, English or German, or a regional language such as Cantonese, Minnan, Henan dialect or Shanghainese, and the intonation covers the different accents of these language types.
In this embodiment, command recognition of the voice information is performed as follows: preset command information is stored in the voice processing unit 12; if the voice information matches the preset command information, it is judged to be a command, otherwise it is judged not to be a command. For example, the recognition mode stored in the voice processing unit 12 contains command words such as "open the curtain" and "close the curtain" as the preset command information, i.e. preset command voices. If, after analysis, the voice information input by the voice recognition unit 11 matches one of these command voices, the voice processing unit 12 judges it to be a command; if it does not match, the voice processing unit 12 judges it not to be a command.
After the voice information is judged to be a command, the voice processing unit 12 converts it into control information according to the recognition mode. For example, after the voice message "open the curtain" or "close the curtain" is recognized as a command, the voice processing unit 12 converts it into the control information corresponding to "open the curtain" or "close the curtain".
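A minimal sketch of this judge-and-convert step, assuming the preset command information is stored as a simple phrase table (the phrases, normalization and control-information values are illustrative assumptions, not the actual chip behavior):

```python
from typing import Optional

# Preset command information stored in the voice processing unit (assumed phrases).
PRESET_COMMANDS = {
    "open the curtain": "CURTAIN_OPEN",
    "close the curtain": "CURTAIN_CLOSE",
}

def command_recognition(voice_info: str) -> Optional[str]:
    """Judge whether the voice information is a command; if so, return the control information."""
    normalized = voice_info.strip().lower()
    return PRESET_COMMANDS.get(normalized)  # None means the utterance is not a command

assert command_recognition("Open the curtain ") == "CURTAIN_OPEN"
assert command_recognition("what time is it") is None  # not a command, no further processing
```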
Preferably, the voice control module 2 includes a voice control unit 21 and an action execution unit 22. The voice control unit 21 is connected to the action execution unit 22. The voice control unit 21 receives the control information and forms a control instruction to be transmitted to the action execution unit 22. The action execution unit 22 executes an action according to the control instruction.
Specifically, the voice control unit 21 is connected to the voice processing unit 12. After the voice processing unit 12 judges the voice information to be a command and converts it into control information according to the recognition mode, it passes the control information to the voice control unit 21. The voice control unit 21 in this embodiment is an MCU chip with a control function, such as a motor control chip. The action execution unit 22 is a device that performs the action, such as a roller-blind motor. After the voice processing unit 12 passes the control information to the voice control unit 21, the voice control unit 21 issues a control instruction such as forward rotation or reverse rotation to the action execution unit 22, and the action execution unit 22 completes the corresponding action, thereby opening or closing the curtain.
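For illustration, the step from control information to motor instruction might look like the sketch below (the control-information values and the motor interface are assumptions, not the actual chip interface):

```python
class CurtainMotor:
    """Action execution unit: a stand-in for the roller-blind motor."""
    def rotate(self, direction: str) -> None:
        print(f"motor rotating {direction}")

class VoiceControlUnit:
    """Turns received control information into a concrete motor instruction."""
    def __init__(self, motor: CurtainMotor):
        self.motor = motor

    def handle(self, control_info: str) -> None:
        if control_info == "CURTAIN_OPEN":
            self.motor.rotate("forward")   # forward rotation opens the curtain
        elif control_info == "CURTAIN_CLOSE":
            self.motor.rotate("reverse")   # reverse rotation closes the curtain

VoiceControlUnit(CurtainMotor()).handle("CURTAIN_OPEN")  # -> motor rotating forward
```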
Referring back to fig. 1, further, the voice editing module 3 includes a voice editing unit 31 and a first wireless communication unit 32. The voice editing unit 31 is connected to the first wireless communication unit 32. The terminal module 4 includes an input unit 41 and a second wireless communication unit 42. The input unit 41 is connected to the second wireless communication unit 42. The first wireless communication unit 32 is wirelessly connected with the second wireless communication unit 42. The editing information inputted by the input unit 41 is wirelessly transmitted to the voice editing unit 31 through cooperation of the second wireless communication unit 42 and the first wireless communication unit 32. The voice editing unit 31 changes the recognition mode of the voice recognition module 1 according to the editing information.
The voice editing unit 31 is connected to the voice processing unit 12. When the operator inputs editing information through the input unit 41, the voice editing unit 31 edits and changes the recognition mode in the voice processing unit 12 according to that editing information. Specifically, the input unit 41 may be an APP built into a smartphone, such as a WeChat mini program. The operator starts the input unit 41 when the recognition mode of the voice recognition module 1 needs to be edited, i.e. when the language type and intonation used for control need to be changed. After the input unit 41 is started, the wireless connection between the second wireless communication unit 42 and the first wireless communication unit 32 is activated. In this embodiment, the second wireless communication unit 42 is a Bluetooth module built into the smartphone and the first wireless communication unit 32 is also a Bluetooth module, so once the two are wirelessly connected, the input unit 41 and the voice editing unit 31 can exchange information over Bluetooth. The voice editing unit 31 in this embodiment is an MCU chip with an editing (e.g. burning) function, or a programmer, which can change the recognition mode stored in the voice processing unit 12, i.e. edit and change the language type and intonation used for voice recognition. For example, the input unit 41 has an editing input button. After the input unit 41 and the voice editing unit 31 have established their information interaction connection, the operator presses and holds the editing input button and speaks the corresponding editing information in the language and intonation used daily, for example the Cantonese phrases for "open the curtain" and "close the curtain". When input is complete, the input unit 41 transmits these Cantonese voices to the voice editing unit 31 through the second wireless communication unit 42 and the first wireless communication unit 32. The voice editing unit 31 deletes the recognition mode originally stored in the voice processing unit 12 and enters the new "open the curtain" and "close the curtain" entries into the voice processing unit 12 to form a new recognition mode, thereby completing the editing and changing the recognition mode of the voice processing unit 12.
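The delete-then-write editing of the recognition mode can be sketched as follows (illustrative only; the Bluetooth transport is abstracted away, and the phrase placeholders and function names are assumptions):

```python
class VoiceProcessingUnit:
    """Stores the recognition mode as a mapping from command phrases to control information."""
    def __init__(self):
        self.recognition_mode = {
            "open the curtain": "CURTAIN_OPEN",
            "close the curtain": "CURTAIN_CLOSE",
        }

class VoiceEditingUnit:
    """Deletes the original recognition mode and enters the newly recorded phrases."""
    def __init__(self, processing_unit: VoiceProcessingUnit):
        self.processing_unit = processing_unit

    def apply_edit(self, new_phrases: dict) -> None:
        self.processing_unit.recognition_mode.clear()              # delete the original mode
        self.processing_unit.recognition_mode.update(new_phrases)  # enter the new command phrases

# Editing information received (conceptually over Bluetooth) from the input unit:
new_mode = {
    "<Cantonese phrase for 'open the curtain'>": "CURTAIN_OPEN",
    "<Cantonese phrase for 'close the curtain'>": "CURTAIN_CLOSE",
}
unit = VoiceProcessingUnit()
VoiceEditingUnit(unit).apply_edit(new_mode)
print(unit.recognition_mode)
```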
Preferably, in order to ensure the accuracy of the editing information, the input unit 41 may also incorporate editing-information confirmation and verification functions. For confirmation, the editing information entered by the operator is displayed and played back in the smartphone APP so the operator can confirm it. For verification, the operator enters the editing information a second time; it is checked against the first entry, and verification is complete if the two are consistent.
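A minimal sketch of the double-entry verification (an assumption for illustration; matching two recorded voice samples in practice would require acoustic comparison rather than simple string equality):

```python
def verify_editing_information(first_entry: str, second_entry: str) -> bool:
    """Verification: the editing information is accepted only if both entries match."""
    return first_entry.strip().lower() == second_entry.strip().lower()

if verify_editing_information("open the curtain", "Open the curtain"):
    print("verification passed: editing information accepted")
else:
    print("entries differ: please input the editing information again")
```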
In this way, the operator, working through the input unit 41 in cooperation with the voice editing unit 31, completes the editing and changing of the recognition mode of the voice processing unit 12, so that voice control can be customized to the operator's own habits and preferences, greatly improving the convenience and operating experience of voice control.
Example two
Referring to fig. 2, fig. 2 is a flowchart of a method for optimizing voice control according to a second embodiment. The method for optimizing voice control in this embodiment can be implemented based on the system for optimizing voice control of the first embodiment, and specifically includes the following steps:
S1, the voice editing module 3 edits the recognition mode of the voice recognition module 1.
S2, the voice recognition module 1 receives voice information, performs command recognition on it according to the edited recognition mode, and converts voice information recognized as a command into control information.
S3, the voice control module 2 executes an action according to the control information.
Because the recognition mode of the voice recognition module 1 is edited through the voice editing module 3, the recognition mode in the voice recognition module 1 can be custom-edited and modified according to the operator's habits and preferences, greatly improving the convenience and operating experience of voice control.
Preferably, before step S1, in which the voice editing module 3 edits the recognition mode of the voice recognition module 1, the method further includes the following step:
S0, the terminal module 4 sends the editing information to the voice editing module 3.
The operator sends the editing information to the voice editing module 3 through the terminal module 4, which greatly improves the convenience of editing.
Preferably, before step S0, in which the terminal module 4 sends the editing information to the voice editing module 3, the method further includes:
s00, the terminal module 4 sends confirmation information and verification information for confirmation by the operator. The terminal module 4 ensures the accuracy of the edited information by sending confirmation information and verifying the confirmation by the operator.
Before step S00, in which the terminal module 4 sends the confirmation information and verification information for the operator to confirm, the method further includes:
S000, the operator inputs the editing information through the terminal module 4.
The recognition mode in this embodiment includes a language type, and preferably the recognition mode further includes a speech intonation (accent).
The implementation of the steps S000, S00, S0, S1, S2, and S3 can refer to the system for optimizing voice control in the first embodiment, and will not be described herein again.
In conclusion, by providing the voice editing module, the recognition mode of the voice recognition module can be edited and voice control optimized, so that the voice information an operator uses daily can be custom-defined as control information, greatly improving the convenience and operating experience of voice control.
The above description is only an embodiment of the present invention, and is not intended to limit the present invention. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (7)

1. A system for optimizing speech control, comprising:
the voice recognition module (1) is used for receiving voice information, performing command recognition on the voice information according to a recognition mode, and converting the voice information recognized as a command into control information;
the voice control module (2) is connected with the voice recognition module (1); the voice control module (2) receives the control information and executes actions according to the control information; and
the voice editing module (3) is connected with the voice recognition module (1); the voice editing module (3) is used for editing the recognition mode of the voice recognition module (1).
2. A system for optimizing speech control according to claim 1, characterized in that it further comprises a terminal module (4); the terminal module (4) is connected with the voice editing module (3); the terminal module (4) is used for sending editing information to the voice editing module (3), and the voice editing module (3) edits the recognition mode of the voice recognition module (1) according to the editing information.
3. The system for optimizing speech control according to claim 1, wherein the speech recognition module (1) comprises a speech recognition unit (11) and a speech processing unit (12); the voice recognition unit (11) is connected with the voice processing unit (12); the voice processing unit (12) stores the recognition mode; the voice recognition unit (11) receives the voice information and transmits the voice information to the voice processing unit (12); the voice processing unit (12) performs command recognition on the voice information according to the recognition mode, and converts the voice information recognized as a command into control information.
4. The system for optimizing voice control according to claim 1, wherein the voice control module (2) comprises a voice control unit (21) and an action execution unit (22); the voice control unit (21) is connected with the action execution unit (22); the voice control unit (21) receives the control information, forms a control instruction and transmits the control instruction to the action execution unit (22); the action execution unit (22) executes an action according to the control instruction.
5. The system for optimizing voice control according to claim 2, wherein the voice editing module (3) comprises a voice editing unit (31) and a first wireless communication unit (32); the voice editing unit (31) is connected with the first wireless communication unit (32); the terminal module (4) comprises an input unit (41) and a second wireless communication unit (42); the input unit (41) is connected with the second wireless communication unit (42); the first wireless communication unit (32) is wirelessly connected with the second wireless communication unit (42); the editing information inputted by the input unit (41) is wirelessly transmitted to the voice editing unit (31) through cooperation of the second wireless communication unit (42) and the first wireless communication unit (32); the voice editing unit (31) changes the recognition mode of the voice recognition module (1) according to the editing information.
6. The system for optimizing speech control of claim 1 wherein the recognition mode comprises a language type.
7. The system for optimizing speech control of claim 1 wherein the recognition mode further comprises a speech accent.
CN202021841397.4U 2020-08-28 2020-08-28 System for optimizing voice control Active CN212907066U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202021841397.4U CN212907066U (en) 2020-08-28 2020-08-28 System for optimizing voice control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202021841397.4U CN212907066U (en) 2020-08-28 2020-08-28 System for optimizing voice control

Publications (1)

Publication Number Publication Date
CN212907066U true CN212907066U (en) 2021-04-06

Family

ID=75251188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202021841397.4U Active CN212907066U (en) 2020-08-28 2020-08-28 System for optimizing voice control

Country Status (1)

Country Link
CN (1) CN212907066U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022041319A1 (en) * 2020-08-28 2022-03-03 广东奥科伟业科技发展有限公司 Voice control optimization system and method

Similar Documents

Publication Title
US7050550B2 (en) Method for the training or adaptation of a speech recognition device
US8676582B2 (en) System and method for speech recognition using a reduced user dictionary, and computer readable storage medium therefor
US6839670B1 (en) Process for automatic control of one or more devices by voice commands or by real-time voice dialog and apparatus for carrying out this process
EP1171870B1 (en) Spoken user interface for speech-enabled devices
CN110998720A (en) Voice data processing method and electronic device supporting the same
EP2311031B1 (en) Method and device for converting speech
KR102056330B1 (en) Apparatus for interpreting and method thereof
EP2311030A1 (en) Method and device for converting speech
CN111325039B (en) Language translation method, system, program and handheld terminal based on real-time call
CA2319997A1 (en) Method and system of operating portable phone by voice recognition
KR102060775B1 (en) Electronic device for performing operation corresponding to voice input
US20060190260A1 (en) Selecting an order of elements for a speech synthesis
CN212907066U (en) System for optimizing voice control
JP2010197669A (en) Portable terminal, editing guiding program, and editing device
CN111091819A (en) Voice recognition device and method, voice interaction system and method
KR20050015585A (en) Apparatus And Method for Enhanced Voice Recognition
CN110400568B (en) Awakening method of intelligent voice system, intelligent voice system and vehicle
CN111986672A (en) System and method for optimizing voice control
CN113228167B (en) Voice control method and device
CN113763935A (en) Method and system for controlling electric appliance of vehicle body through voice outside vehicle, vehicle and storage medium
KR20220125523A (en) Electronic device and method for processing voice input and recording in the same
KR102056329B1 (en) Method for interpreting
JP2014016402A (en) Speech input device
KR20230153854A (en) User terminal, method for controlling user terminal and dialogue management method
KR20190061705A (en) Apparatus and method for recognizing voice using manual operation

Legal Events

Date Code Title Description
GR01 Patent grant