CN111128136A - User-defined voice control method, computer equipment and readable storage medium - Google Patents

User-defined voice control method, computer equipment and readable storage medium

Info

Publication number
CN111128136A
CN111128136A (application CN201911188742.0A)
Authority
CN
China
Prior art keywords
user
control instruction
control
voice
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911188742.0A
Other languages
Chinese (zh)
Inventor
凌华东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingluo Intelligent Technology Co Ltd
Original Assignee
Xingluo Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xingluo Intelligent Technology Co Ltd filed Critical Xingluo Intelligent Technology Co Ltd
Priority to CN201911188742.0A priority Critical patent/CN111128136A/en
Publication of CN111128136A publication Critical patent/CN111128136A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a user-defined voice control method, which comprises the following steps: acquiring the user-defined voice information that a user wants to add; converting the acquired user-defined voice information into text and displaying it; training on the user-defined voice information in combination with reference text input by the user, and confirming a control instruction; after the control instruction is confirmed, recording an execution scheme set by the user; and establishing an association between the control instruction and the execution scheme, so that the user can control the execution scheme through the control instruction. Compared with the prior art, the method allows control instructions to be set according to personal preference, is not constrained by the standardized rules of the developer's voice platform, and offers a high degree of freedom. The user can also individually set the execution scheme corresponding to each control instruction, and by establishing the association the control instructions can be extended without limit; the whole control process is highly intelligent and the user experience is good.

Description

User-defined voice control method, computer equipment and readable storage medium
Technical Field
The invention relates to the technical field of smart home, in particular to a user-defined voice control method, computer equipment and a readable storage medium.
Background
The intelligent home integrates equipment related to home life by taking a house as a platform and utilizing Internet of things technologies such as a comprehensive wiring technology, a network communication technology, a safety precaution technology, an automatic control technology, an audio and video technology and the like so as to realize comprehensive management. With the continuous progress of society, the requirements of people on the quality of life are gradually improved, so that higher requirements are provided for the control of smart homes.
With the rapid development of speech recognition, major companies have carried out in-depth research on it and have gradually released their own voice platforms, to which developers can add functions according to their own needs and ideas and then invoke them. Current smart-home products already support speech recognition, which has become an essential technology for such products.
In the prior art, a user invokes one or more specific functions, each composed of one or more intents, directly through voice dialogue. For example, for a music-playing function, the user may say "play music" to automatically turn on the player; for a bedroom light, the user says "turn on the bedroom light" and a command is sent to turn on the bedroom lamp. Ultimately, however, these functions are added to the voice platform by the developer and are identical for every user. This scheme does not account for differences between users, so the control process is over-standardized, lacks personalization and interest, limits the extensibility of smart-device control, and results in a poor user experience.
Disclosure of Invention
To solve the above technical problem, the invention provides a user-defined voice control method, computer equipment and a readable storage medium.
According to one aspect of the invention, a user-defined voice control method is provided, which comprises:
acquiring the user-defined voice information that a user wants to add;
converting the acquired user-defined voice information into text and displaying it;
training on the user-defined voice information in combination with reference text input by the user, and confirming a control instruction;
after the control instruction is confirmed, recording an execution scheme set by the user;
and establishing an association between the control instruction and the execution scheme, so that the user can control the execution scheme through the control instruction.
Further, the step of training on the user-defined voice information in combination with the reference text input by the user and confirming a control instruction comprises:
acquiring the reference text input by the user;
comparing the converted text information with the reference text input by the user to determine qualified voice information;
and packaging at least two pieces of qualified voice information to form the control instruction.
Further, after the step of packaging at least two pieces of qualified voice information to form the control instruction, the method further includes:
judging whether the same control instruction exists or not; if yes, the control instruction is not added; and if the control instruction does not exist, confirming and adding the control instruction.
Further, in the step of comparing the converted text information with the reference text input by the user to determine qualified voice information, it is judged whether the text matching degree between the reference text and the converted text information is greater than a first threshold; when the matching degree is smaller than the first threshold, the user-defined voice information is determined to be unqualified and the user needs to add it again; and when the matching degree is greater than the first threshold, the user-defined voice information is marked with the reference text and recorded in association with it.
Further, after the step of establishing the association relationship between the control instruction and the execution scheme, the method further includes:
and updating and storing the established association relation in a cloud server.
Further, the step of updating and storing the established association relationship in the cloud server comprises:
judging whether the execution scheme has a starting instruction or not;
if the execution scheme has the corresponding starting instruction, adding and storing the control instruction in a part of the starting instruction in the cloud server; and if the execution scheme does not have a corresponding starting instruction, updating and storing the control instruction and the execution scheme in the cloud server.
Further, before the step of obtaining the voice information added by the user, the method further includes:
and determining the identity of the user, and triggering an instruction adding function through a set triggering condition.
Further, the identity of the user is confirmed through voiceprint recognition, and the triggering of the instruction-adding function is confirmed by detecting a voice keyword.
According to another aspect of the present invention, there is provided a computer device comprising a processor and a memory, the processor being coupled to the memory, the processor being operative to execute instructions to implement the custom voice control method described above.
According to another aspect of the present invention, there is provided a readable storage medium having stored thereon a computer program to be executed by a processor to implement the above-described custom voice control method.
Compared with the prior art, in the user-defined voice control method, computer equipment and readable storage medium provided by the invention, a user can first set a control instruction by way of training according to personal preference, free from the standardized rules of the developer's voice platform and with a high degree of freedom. In addition, the user can individually set the execution scheme corresponding to the control instruction, and by establishing an association between the control instruction and the execution scheme, the control instructions can be extended without limit; the whole control process is highly intelligent and the user experience is improved.
Drawings
FIG. 1 is a flowchart of a user-defined voice control method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S400 in FIG. 1;
FIG. 3 is a detailed flowchart of step S700 in FIG. 1;
FIG. 4 is a schematic block diagram of a computer device according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantageous effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As shown in FIG. 1, a flowchart of a user-defined voice control method according to an embodiment of the present invention, the method includes:
s100, determining the identity of a user, and triggering an instruction adding function through a set triggering condition;
First, a control list is stored in the cloud server for each user, containing at least the start instructions, the execution schemes and related content. In addition, because a household has multiple users, the same instruction could produce multiple execution effects if users were not distinguished (for example, if user A adds a "turn on the light" instruction that turns on one lamp, and user B adds a "turn on the light" instruction that turns on a different lamp, then when "turn on the light" is received it is impossible to decide which lamp should be turned on, causing confusion). Therefore, to avoid this situation, when a user wants to add an instruction, the identity of the user must first be determined, and an instruction added by a user should be operable only by that user. In the embodiment of the invention, the voice characteristics of each user are stored in advance, and the identity of the user is determined by voiceprint recognition on the speech acquired in real time.
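The per-user storage described above can be sketched as follows (a hypothetical Python sketch; the patent does not specify data structures, and the user identifiers and device names are illustrative):

```python
# Hypothetical sketch: the cloud server keeps one control list per
# identified user, so the same instruction text can map to different
# actions for different users without conflict.
user_lists = {}  # voiceprint identity -> that user's control list

def control_list_for(user_id: str) -> dict:
    """Fetch (or lazily create) the control list of one user."""
    return user_lists.setdefault(user_id, {})

# User A and user B each add "turn on the light" with different targets.
control_list_for("user_a")["turn on the light"] = "living-room lamp"
control_list_for("user_b")["turn on the light"] = "bedroom lamp"
```

Because each instruction is resolved inside the speaking user's own list, the ambiguity described above does not arise.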
After the identity of the user is confirmed, the user needs to provide a trigger signal to activate the instruction-adding function. In the embodiment of the present invention, the trigger is confirmed by detecting a voice keyword. The voice keyword is a standardized rule usually set by the developer's voice platform; for example, when the user says "start editing function" or "edit", the word "edit" is detected in the voice input and the instruction-adding function is triggered. In other embodiments of the present invention, the user may also trigger the instruction-adding function by pressing a physical or virtual key on the smart device or by operating the mobile-terminal APP.
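The keyword trigger can be sketched as follows (an illustrative Python sketch; the keyword set is an assumption, not taken from the patent):

```python
# Hypothetical trigger detection: the instruction-adding function fires
# when a trigger keyword such as "edit" occurs in the transcribed speech.
TRIGGER_KEYWORDS = ("edit",)  # assumed keyword set

def should_trigger_add_mode(transcript: str) -> bool:
    """True if any trigger keyword occurs in the (lowercased) transcript."""
    text = transcript.lower()
    return any(kw in text for kw in TRIGGER_KEYWORDS)

# "start editing function" triggers because it contains "edit".
```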
S200, obtaining user-defined voice information which needs to be added by a user;
After the user triggers the instruction-adding function, the user speaks the user-defined voice information to the smart device, and the voice information is acquired through a microphone installed on the device. In the embodiment of the present invention, the content of the voice information is not limited, but in principle it should not violate public order and good morals; in addition, the added voice information also supports local dialects. In a preferred embodiment of the invention, the user-defined voice information is automatically screened, and voice information that violates public order and good morals is not added.
S300, converting the acquired user-defined voice information into text and displaying it;
After the smart device acquires the user-defined voice information provided by the user, it converts the information into text and displays the converted text on the display screen of the smart device or of the mobile terminal, so that the user can review and confirm it.
S400, training on the user-defined voice information in combination with the reference text input by the user, and confirming a control instruction;
The purpose of this step is to define the control instruction accurately by way of training. Referring to FIG. 2, which is a detailed flowchart of step S400 in FIG. 1, step S400 includes:
s410, acquiring reference characters input by a user;
in this step, since there is a case where part of the voice information provided by the user has accents, local dialects, or network expressions, the voice platform of the developer cannot effectively recognize the information. Therefore, after the intelligent device performs text conversion on the user-defined voice information provided by the user, the user is required to input correct voice information in a text form in the intelligent device or the mobile terminal APP to form a reference, and the reference is used as a subsequent comparison standard.
S420, comparing the converted text information with the reference text input by the user, and determining qualified voice information;
The purpose of this step is to confirm qualified voice information by text-similarity comparison. Specifically, it is judged whether the text matching degree between the reference text and the converted text is greater than a first threshold; when the matching degree is smaller than the first threshold, the user-defined voice information is determined to be unqualified and must be added again by the user; when the matching degree is greater than the first threshold, the user-defined voice information is marked with the reference text and recorded in association with it.
Taking Chinese as an example, Chinese characters differ in pronunciation between flat-tongue (non-retroflex) and warped-tongue (retroflex) initials, and each pronunciation further differs across the four tones, so these properties can be used for matching-degree identification. In the embodiment of the invention, the matching degree of the text is confirmed by jointly considering whether the character, the tone and the pronunciation are the same. In a preferred embodiment of the invention, a flat-tongue reading is treated as the same as the corresponding warped-tongue reading.
For example, when the user-defined speech input by the user contains four characters, each character has the same weight coefficient of 25%, and the weighted values (score × weight coefficient) of the characters are accumulated to obtain the text matching degree.
The score for each character is given by the following table:

Character same? | Tone same? | Pronunciation same? | Score
Yes | — | — | 1
No | No | No | 0.7
No | No | Yes | 0.8
No | Yes | No | 0.8
No | Yes | Yes | 0.9

When the characters are the same, the score is directly 1 regardless of tone and pronunciation; when the characters differ but tone and pronunciation are both the same, the score is 0.9; when the characters differ and only one of tone or pronunciation is the same, the score is 0.8; and when characters, tone and pronunciation all differ, the score is 0.7. In this embodiment, the weighted scores of the characters are accumulated; when the accumulated value is greater than or equal to 0.9, the voice information is determined to be qualified, marked with the reference text input by the user and stored. When the accumulated value is less than 0.9, the voice information is determined to be unqualified and must be input again by the user.
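The per-character scoring and weighted accumulation described above can be sketched as follows (an illustrative Python sketch; the patent does not specify an implementation, and the equal 25% weights follow the four-character example):

```python
# Score table from the description: when the characters differ, the score
# depends on whether the tone and the pronunciation match.
SCORES = {
    (False, False): 0.7,  # tone and pronunciation both differ
    (False, True): 0.8,   # only pronunciation matches
    (True, False): 0.8,   # only tone matches
    (True, True): 0.9,    # tone and pronunciation both match
}

def char_score(same_char: bool, same_tone: bool, same_pron: bool) -> float:
    """An identical character scores 1 regardless of tone/pronunciation."""
    return 1.0 if same_char else SCORES[(same_tone, same_pron)]

def matching_degree(per_char):
    """Equal weight coefficients (1/n each); accumulate score * weight."""
    return sum(char_score(*c) for c in per_char) / len(per_char)

# Four characters: two exact matches, one with matching tone and
# pronunciation only, one with nothing in common.
m = matching_degree([(True, True, True), (True, True, True),
                     (False, True, True), (False, False, False)])
# (1 + 1 + 0.9 + 0.7) / 4 = 0.9, exactly the qualification threshold
```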
S430, packaging at least two qualified voice messages to form a control instruction;
In theory, the user-defined voice information input by the user cannot be exactly the same every time, and several rounds of training and learning are needed to achieve the desired effect. Therefore, in the embodiment of the present invention, a control instruction should contain at least two pieces of qualified voice information.
S440, judging whether the same control instruction exists or not; if the same control instruction exists, the process is ended; if there is no identical control instruction, the process proceeds to step S450.
In this step, the control list corresponding to the user identity is retrieved from the cloud server, and the control instruction formed in step S430 is compared with the start instructions in that list; if the same control instruction already exists in the control list, the control instruction is not added.
And S450, confirming and adding the control command.
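Steps S430 to S450 can be sketched as follows (a hypothetical Python sketch; the control-list structure and return values are illustrative assumptions):

```python
# Hypothetical packaging and de-duplication: a control instruction needs
# at least two qualified voice samples, and it is only added when the
# user's control list does not already contain the same instruction.
def add_instruction(control_list: dict, reference_text: str, samples: list) -> bool:
    """control_list maps instruction text -> recorded voice samples.
    Returns True if added, False if a duplicate exists or fewer than
    two qualified samples were collected."""
    if len(samples) < 2:                # S430: need >= 2 qualified samples
        return False
    if reference_text in control_list:  # S440: same instruction exists
        return False
    control_list[reference_text] = list(samples)  # S450: confirm and add
    return True
```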
S500, recording an execution scheme set by a user after the control instruction is confirmed;
in the embodiment of the invention, the user can set the execution scheme (such as playing music, turning on/off a television, turning on/off a lamp, turning on/off a curtain and the like) for the confirmed control instruction in the mobile terminal APP.
S600, establishing an association between the control instruction and the execution scheme, whereby the user can control the execution scheme through the control instruction;
In the embodiment of the invention, through the established association between the control instruction and the execution scheme, the user only needs to speak the control instruction to control the corresponding execution scheme.
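Step S600 can be sketched as follows (a hypothetical Python sketch; the scheme names are illustrative):

```python
# Hypothetical association between control instructions and execution
# schemes: once bound, a recognized utterance resolves directly to the
# scheme the user configured.
associations = {}  # control instruction -> execution scheme

def bind(instruction: str, scheme: str) -> None:
    """Establish the association relation for one instruction."""
    associations[instruction] = scheme

def handle_speech(recognized_text: str):
    """Return the bound execution scheme, or None if nothing matches."""
    return associations.get(recognized_text)

bind("hi", "play music")  # the custom phrase now starts music playback
```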
S700, updating and storing the established association relation in the cloud server.
Please refer to fig. 3, which is a detailed flowchart of step S700 in fig. 1; the step S700 includes:
s710, judging whether the execution scheme has a starting instruction; if the starting instruction exists, the step S720 is entered; if the execution plan has no corresponding start instruction, the process proceeds to step S730.
Specifically, suppose for example that the execution scheme of the music-playing function can already be activated by the control instruction "play music" in the user's control list; when the user-defined speech "hi" is also set to do nothing but play music, it is recognized that a start instruction for this scheme already exists.
S720, adding and storing the control instruction in the part of the starting instruction in the cloud server;
In the embodiment of the present invention, since a previous start instruction already exists, this step only needs to add the new instruction to the start-instruction part corresponding to the execution scheme, and the user may then start the execution scheme through either the original start instruction or the newly added control instruction.
And S730, updating and storing the control instruction and the execution scheme in the cloud server.
In the embodiment of the present invention, since no previous start instruction exists, a record containing both the control instruction and the execution scheme must be added to the control list corresponding to the user identity in the cloud server.
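Steps S710 to S730 can be sketched as follows (a hypothetical Python sketch of the two storage branches; the list layout and scheme names are assumptions):

```python
# Hypothetical update of the cloud-side control list: if the execution
# scheme already has a start instruction, the new control instruction is
# appended as an additional alias (S720); otherwise a new scheme entry
# is created (S730).
def update_control_list(control_list: dict, instruction: str, scheme: str) -> str:
    """control_list maps execution scheme -> list of start instructions."""
    if scheme in control_list:                    # S710: start instruction exists
        control_list[scheme].append(instruction)  # S720: add as an alias
        return "alias_added"
    control_list[scheme] = [instruction]          # S730: store a new pair
    return "new_entry"
```

With the "play music" example above, adding the custom phrase "hi" to a scheme that already has a start instruction takes the S720 branch, so either phrase starts playback.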
In actual operation, the identity of the user is identified first; after identification, the control list stored for that user in the cloud server is retrieved; the user can then speak any start instruction in the control list to execute the corresponding control scheme. This process supports identity recognition for multiple users and invocation of each user's own control functions.
Referring to FIG. 4, a schematic block diagram of a computer apparatus is provided according to an embodiment of the present invention. The computer device in this embodiment comprises a processor 410 and a memory 420, wherein the processor 410 is coupled to the memory 420, and the processor 410 executes instructions to implement the customized voice control method in any of the above embodiments when operating.
The processor 410 may also be referred to as a Central Processing Unit (CPU). The processor 410 may be an integrated circuit chip having signal processing capabilities. The processor 410 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor, but is not limited thereto.
Referring to fig. 5, a schematic block diagram of a readable storage medium according to an embodiment of the invention is shown. The readable storage medium in this embodiment stores a computer program 510, and the computer program 510 can be executed by a processor to implement the customized voice control method in any of the above embodiments.
Alternatively, the readable storage medium may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or it may be a terminal device such as a computer, a server, a mobile phone or a tablet.
Compared with the prior art, in the user-defined voice control method, computer equipment and readable storage medium provided by the invention, a user can first set a control instruction by way of training according to personal preference, free from the standardized rules of the developer's voice platform and with a high degree of freedom. In addition, the user can individually set the execution scheme corresponding to the control instruction, and by establishing an association between the control instruction and the execution scheme, the control instructions can be extended without limit; the whole control process is highly intelligent and the user experience is improved.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Any modification, equivalent replacement or improvement made within the technical concept of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A user-defined voice control method, characterized by comprising the following steps:
acquiring the user-defined voice information that a user wants to add;
converting the acquired user-defined voice information into text and displaying it;
training on the user-defined voice information in combination with reference text input by the user, and confirming a control instruction;
after the control instruction is confirmed, recording an execution scheme set by the user;
and establishing an association between the control instruction and the execution scheme, so that the user can control the execution scheme through the control instruction.
2. The user-defined voice control method of claim 1, wherein the step of training on the user-defined voice information in combination with the reference text input by the user and confirming a control instruction comprises:
acquiring the reference text input by the user;
comparing the converted text information with the reference text input by the user to determine qualified voice information;
and packaging at least two pieces of qualified voice information to form the control instruction.
3. The method of claim 2, wherein after the step of packaging at least two pieces of qualified speech information to form the control command, further comprising:
judging whether the same control instruction exists or not; if yes, the control instruction is not added; and if the control instruction does not exist, confirming and adding the control instruction.
4. The user-defined voice control method according to claim 3, wherein in the step of comparing the converted text information with the reference text input by the user to determine qualified voice information, it is judged whether the text matching degree between the reference text and the converted text information is greater than a first threshold; when the matching degree is smaller than the first threshold, the user-defined voice information is determined to be unqualified and the user needs to add it again; and when the matching degree is greater than the first threshold, the user-defined voice information is marked with the reference text and recorded in association with it.
5. The custom voice control method of claim 4, after the step of establishing an association of the control instruction with the execution scheme, further comprising:
and updating and storing the established association relation in a cloud server.
6. The user-defined voice control method according to claim 5, wherein the step of updating and storing the established association relationship in a cloud server comprises:
judging whether the execution scheme has a starting instruction or not;
if the execution scheme has the corresponding starting instruction, adding and storing the control instruction in a part of the starting instruction in the cloud server; and if the execution scheme does not have a corresponding starting instruction, updating and storing the control instruction and the execution scheme in the cloud server.
7. The customized voice control method according to any of claims 1-6, further comprising, before the step of obtaining the voice information added by the user:
and determining the identity of the user, and triggering an instruction adding function through a set triggering condition.
8. The user-defined voice control method of claim 7, wherein the identity of the user is confirmed through voiceprint recognition, and the triggering of the instruction-adding function is confirmed by detecting a voice keyword.
9. A computer device comprising a processor and a memory, the processor coupled to the memory, the processor in operation executing instructions to implement the custom voice control method of any of claims 1-8.
10. A readable storage medium having stored thereon a computer program for execution by a processor to implement the method of customized voice control according to any of claims 1-8.
CN201911188742.0A 2019-11-28 2019-11-28 User-defined voice control method, computer equipment and readable storage medium Pending CN111128136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911188742.0A CN111128136A (en) 2019-11-28 2019-11-28 User-defined voice control method, computer equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN111128136A (en) 2020-05-08

Family

ID=70496985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911188742.0A Pending CN111128136A (en) 2019-11-28 2019-11-28 User-defined voice control method, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111128136A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842306A (en) * 2012-08-31 2012-12-26 深圳Tcl新技术有限公司 Voice control method and device as well as voice response method and device
CN105404161A (en) * 2015-11-02 2016-03-16 百度在线网络技术(北京)有限公司 Intelligent voice interaction method and device
CN105679315A (en) * 2016-03-22 2016-06-15 谢奇 Voice-activated and voice-programmed control method and control system
CN105989841A (en) * 2015-02-17 2016-10-05 上海汽车集团股份有限公司 Vehicle-mounted speech control method and device
CN108172223A (en) * 2017-12-14 2018-06-15 深圳市欧瑞博科技有限公司 Voice instruction recognition method, device and server and computer readable storage medium
CN109785834A (en) * 2019-01-24 2019-05-21 中国—东盟信息港股份有限公司 Verification-code-based voice data sample collection system and method
CN109887497A (en) * 2019-04-12 2019-06-14 北京百度网讯科技有限公司 Speech recognition modeling method, apparatus and device


Similar Documents

Publication Publication Date Title
CN107112014B (en) Application focus in speech-based systems
JP7086521B2 (en) Information processing method and information processing equipment
CN105723360A (en) Improving natural language interactions using emotional modulation
US11862153B1 (en) System for recognizing and responding to environmental noises
JP6306528B2 (en) Acoustic model learning support device and acoustic model learning support method
JP5834291B2 (en) Voice recognition device, automatic response method, and automatic response program
CN109065045A (en) Audio recognition method, device, electronic equipment and computer readable storage medium
JP2014038132A (en) Information processor, program, and information processing method
KR101950387B1 (en) Method, computer device and computer readable recording medium for building or updating knowledgebase models for interactive ai agent systen, by labeling identifiable but not-learnable data in training data set
CN111429917B (en) Equipment awakening method and terminal equipment
US10216732B2 (en) Information presentation method, non-transitory recording medium storing thereon computer program, and information presentation system
JPWO2018043138A1 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
JP6856115B2 (en) Information processing method and information processing equipment
JP6254504B2 (en) Search server and search method
CN107767862B (en) Voice data processing method, system and storage medium
CN104038637B (en) Ringtone playing method and device and mobile terminal
KR20230005400A (en) Text or voice communication using standard utterances
CN111128136A (en) User-defined voice control method, computer equipment and readable storage medium
WO2022143349A1 (en) Method and device for determining user intent
CN113920996A (en) Voice interaction processing method and device, electronic equipment and storage medium
KR20190070682A (en) System and method for constructing and providing lecture contents
WO2020188622A1 (en) Editing support program, editing support method, and editing support device
WO2020031292A1 (en) Voice ai model switching system, voice ai model switching method, and program
WO2018179227A1 (en) Telephone answering machine text providing system, telephone answering machine text providing method, and program
JP7230085B2 (en) Method and device, electronic device, storage medium and computer program for processing sound

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200508