CN108010518B - Voice acquisition method, system and storage medium of voice interaction equipment - Google Patents
- Publication number
- CN108010518B (application CN201711324790.9A)
- Authority
- CN
- China
- Prior art keywords
- voice
- server
- voice interaction
- voice data
- acquisition
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G10L15/063: Training (creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice)
- G10L15/10: Speech classification or search using distance or distortion measures between unknown speech and reference templates
- G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L15/14: Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L2015/0638: Training; interactive procedures
- Y02D30/70: Reducing energy consumption in wireless communication networks
Abstract
The embodiment of the invention discloses a voice acquisition method, a system and a storage medium for a voice interaction device, applied to the technical field of information processing. In the method of the embodiment, when the voice recognition database in the server is optimized, voice data can be collected through the voice interaction device, and the server automatically classifies the collected voice data, so that voice collection can be accomplished through the voice interaction device alone, without going to a specific site, which facilitates voice data collection. Moreover, when collecting multiple pieces of voice data, the voice interaction device can directly start the voice acquisition interface to begin collecting the next piece, without being woken up again, so the voice data collection efficiency is higher.
Description
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and a system for acquiring a voice of a voice interaction device, and a storage medium.
Background
In the field of voice recognition, user voice is mainly recognized online through an acoustic model and a language model in a database. The acoustic model and the language model are obtained by training on a large amount of collected voice, so voice data need to be collected whenever the database needs to be optimized. In a conventional voice collection method, voice is collected through a voice interaction device: when the voice interaction device is woken up and collects a piece of voice data, it performs some reaction, such as playing music or reading text aloud, so if another piece of voice data needs to be collected, the voice interaction device has to be woken up again.
However, the dialects in China are relatively complex; the Sichuan dialect alone has many branches, for example. To optimize the database, a large amount of voice in each dialect needs to be collected, and with the traditional voice collection method the voice interaction device has to be woken up for every piece of voice data, so the collection efficiency is low.
Therefore, when voice data needs to be collected to optimize the database, a third-party recording company is usually relied on: after voice data are collected with recording equipment on the recording company's site, a background user has to store the collected voice data in the database of the corresponding dialect Mandarin or standard Mandarin, so the voice collection cycle is long and the cost rises accordingly.
Disclosure of Invention
The embodiment of the invention provides a voice acquisition method, a system and a storage medium for a voice interaction device, with which, after the voice data for one user operation instruction is collected, the voice acquisition interface is directly started to collect the next piece of voice data.
A first aspect of an embodiment of the present invention provides a voice acquisition method for a voice interaction device, including:
after the voice interaction device is awakened, when a voice collection instruction is received, determining whether the voice interaction device has the collection qualification;
if the voice interaction device has the collection qualification, acquiring, by the voice interaction device, a user operation instruction from the server, wherein the user operation instruction comprises the content of the voice to be collected;
the voice interaction equipment outputs the user operation instruction and starts a voice acquisition interface of the voice interaction equipment;
the voice interaction equipment acquires a piece of voice data corresponding to the user operation instruction from the voice acquisition interface;
the voice interaction equipment sends the collected voice data to the server so that the server can classify the voice data according to preset acoustic models of various types;
after the voice interaction device collects the piece of voice data, executing, for another user operation instruction, the steps of obtaining the other user operation instruction from the server, outputting it, starting the voice acquisition interface, collecting another piece of voice data and sending it to the server.
A second aspect of the embodiments of the present invention provides a method for acquiring a voice of a voice interaction device, including:
if the voice interaction equipment has the collection qualification, the server sends a user operation instruction to the voice interaction equipment, wherein the user operation instruction comprises the content of the voice to be collected;
the server receives voice data collected by the voice interaction equipment according to the user operation instruction;
the server determines initial acoustic models corresponding to a plurality of initial primitives included in the acquired voice data respectively to obtain a plurality of initial acoustic models;
the server calculates the acoustic distance between each initial acoustic model and a preset acoustic model of standard Mandarin;
and if the acoustic distance corresponding to at least one initial acoustic model is greater than a first threshold and smaller than a second threshold, merging the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on relevant primitives, taking the merged acoustic model as the final acoustic model of the collected voice data, and determining the dialect Mandarin type to which the collected voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins.
A third aspect of an embodiment of the present invention provides a voice interaction device, including:
the authority determining unit is configured to determine, after the voice interaction device is awakened and when a voice collection instruction is received, whether the voice interaction device has the collection qualification, and to notify the instruction acquisition unit to obtain a user operation instruction if it does;
the instruction acquisition unit is used for acquiring a user operation instruction from the server and outputting the user operation instruction if the voice interaction equipment has the acquisition authority, wherein the user operation instruction comprises the content of voice to be acquired;
the acquisition unit is used for starting a voice acquisition interface and acquiring a piece of voice data corresponding to the user operation instruction from the voice acquisition interface;
the acquisition and transmission unit is used for transmitting the acquired voice data to the server so that the server can classify the acquired voice data according to preset acoustic models of various types;
and the acquisition unit is also used for triggering the instruction acquisition unit to acquire and output another user operation instruction after acquiring the voice data.
A fourth aspect of the embodiments of the present invention provides a server, including:
the instruction sending unit is used for sending a user operation instruction to the voice interaction equipment if the voice interaction equipment has the collection qualification, wherein the user operation instruction comprises the content of voice to be collected;
the acquisition receiving unit is used for receiving voice data acquired by the voice interaction equipment according to the user operation instruction;
the model determining unit is used for determining initial acoustic models corresponding to a plurality of initial primitives included in the acquired voice data to obtain a plurality of initial acoustic models;
the distance calculation unit is used for calculating the acoustic distance between each initial acoustic model and a preset acoustic model of the standard mandarin;
and the distance processing unit is configured to, if the acoustic distance corresponding to at least one initial acoustic model is greater than a first threshold and smaller than a second threshold, merge the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on relevant primitives, take the merged acoustic model as the final acoustic model of the collected voice data, and determine the dialect Mandarin type to which the collected voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins.
A fifth aspect of the embodiments of the present invention provides a storage medium storing a plurality of instructions, where the instructions are adapted to be loaded by a processor to execute the voice acquisition method of a voice interaction device according to the first aspect or the second aspect of the embodiments of the present invention.
A sixth aspect of the embodiments of the present invention provides a terminal device, including a processor and a storage medium, where the processor is configured to execute instructions;
the storage medium is configured to store a plurality of instructions, where the instructions are adapted to be loaded by the processor to execute the voice acquisition method of a voice interaction device according to the first aspect of the embodiments of the present invention.
A seventh aspect of the embodiments of the present invention provides a voice collecting system, including a voice interaction device and a server, where the voice interaction device is the voice interaction device according to the third aspect of the embodiments of the present invention; the server is the server according to the fourth aspect of the embodiment of the present invention.
Therefore, in the method of the embodiment, when the voice recognition database in the server is optimized, voice data can be collected through the voice interaction device, and the server automatically classifies and stores the collected voice data, so that voice collection can be accomplished through the voice interaction device alone, without going to a specific site, which facilitates voice data collection. Moreover, when collecting multiple pieces of voice data, the voice interaction device can directly start the voice acquisition interface to begin collecting the next piece as soon as one piece is finished, without being woken up again, so the voice data collection efficiency is higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a scene to which a voice acquisition method of a voice interaction device according to an embodiment of the present invention is applied;
Fig. 2 is a flowchart of a voice acquisition method of a voice interaction device according to an embodiment of the present invention;
Fig. 3 is a flowchart of the method performed by the server during voice acquisition by a voice interaction device according to an embodiment of the present invention;
Fig. 4 is a diagram illustrating voice acquisition by a voice interaction device in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a voice interaction device provided in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
An embodiment of the present invention provides a voice collecting method for voice interaction devices, which is mainly applicable to a scenario shown in fig. 1, where the scenario includes a server, at least one client (one is illustrated in fig. 1) and a plurality of voice interaction devices (n are illustrated in fig. 1).
The server can push information of a voice collection activity to at least one client in advance in a plurality of ways, and the client displays the information. The information of the voice collection activity includes an invitation to each user to collect voice, and can further include a start interface for inputting user information. If a user wants to respond to the voice collection activity and perform voice collection, the user can click the start interface; the client then displays an input interface for user information, and the user inputs his or her voice collection enrollment information into the input interface and uploads it through the input interface to the server for storage.
Specifically, the server may push the information of the voice collection activity by sending an activity webpage to a client, where the start interface for inputting user information included in the webpage is a link to the input interface, or information such as a two-dimensional code corresponding to that link. The voice collection enrollment information a user inputs into the input interface may include the user information (such as a user identifier) bound to the voice interaction device, and the user's dialect, region, age, and the like.
After the user uploads the voice collection entry information to the server through the input interface, voice data can be collected through any one of the voice interaction devices, specifically, for the voice interaction device:
after the voice interaction device is awakened, when a voice collection instruction is received, it determines whether it has the collection qualification; if the user corresponding to the voice interaction device has the collection qualification, the voice interaction device obtains a user operation instruction from the server, where the user operation instruction includes the content of the voice to be collected; the voice interaction device then outputs the user operation instruction and starts the voice acquisition interface; the voice interaction device collects a piece of voice data corresponding to the user operation instruction through the voice acquisition interface and sends it to the server; and after the voice interaction device has collected the piece of voice data, it executes, for another user operation instruction, the steps of obtaining the other user operation instruction from the server, outputting it, starting the voice acquisition interface, collecting another piece of voice data and sending it to the server.
For the server: if the voice interaction device has the collection qualification, the server sends a user operation instruction to the voice interaction device, where the user operation instruction includes the content of the voice to be collected. When the server receives voice data collected by the voice interaction device according to a user operation instruction, it determines the initial acoustic models respectively corresponding to a plurality of initial primitives included in the collected voice data, obtaining a plurality of initial acoustic models; it calculates the acoustic distance between each initial acoustic model and the preset acoustic model of standard Mandarin; and if the acoustic distance corresponding to at least one initial acoustic model is greater than a first threshold and smaller than a second threshold, it merges the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on the relevant primitives, takes the merged acoustic model as the final acoustic model of the collected voice data, and determines the dialect Mandarin type to which the collected voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins.
Therefore, when the voice recognition database is optimized, voice collection can be accomplished through the voice interaction device alone, without going to a specific site, which makes voice data collection convenient. Moreover, when the voice interaction device collects multiple pieces of voice data, after one piece has been collected according to a user operation instruction, the device can directly obtain another user operation instruction and start the voice acquisition interface to begin collecting the next piece, without being woken up again, so the voice data collection efficiency is high.
An embodiment of the present invention provides a voice collecting method for a voice interaction device, which is a method executed by the voice interaction device, and a flowchart is shown in fig. 2, where the method includes:
It is understood that a voice interaction device is a device that recognizes user voice and gives the user feedback based mainly on the recognized voice, such as an intelligent robot. Any voice interaction device can log in to the server through its bound user information, and the server stores the user information bound to each voice interaction device, such as the correspondence between a user account (i.e. user identifier) and a password.
In this embodiment, when the user performs voice acquisition through the voice interaction device, the wake-up instruction may be input to the voice interaction device, and when the voice interaction device recognizes the wake-up instruction of the user, the voice interaction device may be converted from the sleep state to the working state, and when the voice acquisition instruction of the user is recognized, the process of this embodiment is triggered and executed. For example, when the user inputs a "hello" voice to the voice interaction device, the voice interaction device recognizes the wake-up command, and when the user inputs a "start to collect voice" voice to the voice interaction device, the voice interaction device recognizes the voice collection command.
Step 101, after being awakened, the voice interaction device determines, upon receiving a voice collection instruction, whether it has the collection qualification. Specifically, the voice interaction device may send a query request to the server, where the query request includes the user identifier of the voice interaction device; the server queries, according to the query request, whether voice collection enrollment information corresponding to the user identifier is stored, and returns the query result to the voice interaction device. If the query result received by the voice interaction device is that the server stores the voice collection enrollment information corresponding to the user identifier, the voice interaction device determines that it has the collection qualification; otherwise, it determines that it does not.
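To make the exchange concrete, the following is a minimal Python sketch of the qualification check described above. The function names and the in-memory enrollment store are illustrative assumptions standing in for the real device-server protocol, not part of the patent.

```python
# Illustrative sketch only: ENROLLMENT_STORE and the function names are
# assumptions standing in for the server's real storage and network protocol.

# Enrollment info uploaded earlier through the activity's input interface,
# keyed by the user identifier bound to the voice interaction device.
ENROLLMENT_STORE = {
    "user-123": {"dialect": "Sichuan", "region": "Chengdu", "age": 30},
}

def server_handle_query(user_id: str) -> bool:
    """Server side: the device is qualified iff enrollment info is stored."""
    return user_id in ENROLLMENT_STORE

def device_has_collection_qualification(user_id: str) -> bool:
    """Device side: send the query request and interpret the query result."""
    return server_handle_query(user_id)  # stands in for a network round trip
```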
Step 102, if the voice interaction device has the collection qualification, it obtains a user operation instruction from the server. The user operation instruction is an instruction that requires the user to perform a certain operation, and specifically includes the content of the voice to be collected; for example, the content included in a user operation instruction may be "how is the weather this year", "I like Jay Chou's Rice Fragrance", and the like.
It should be noted that, in a voice collection activity, there is a large amount of content to be collected, and it needs to be sent to the voice interaction device through a plurality of user operation instructions.
Step 103, the voice interaction device outputs the user operation instruction obtained in step 102, starts the voice acquisition interface, and collects, through the voice acquisition interface, a piece of voice data corresponding to the user operation instruction.
Specifically, if the voice interaction device includes a display screen, the user operation instruction may be displayed on that screen when it is output, for example, "please read the following text aloud to the voice interaction device"; if the voice interaction device does not include a display screen but includes a playing device, the user operation instruction may be played through the playing device; and if the voice interaction device does not include a display screen but is connected to a printing device through a printing interface, the user operation instruction may be printed through the printing interface.
When the voice interaction device starts the voice acquisition interface, its microphone can be turned on. The user can then read the content of the voice to be collected in the user operation instruction into the microphone, and the voice interaction device collects a piece of voice data corresponding to that instruction.
Step 104, the voice interaction device sends the voice data collected in step 103 to the server, so that the server classifies the voice data according to preset acoustic models of various types, mainly classifying the received voice data as standard Mandarin or as some dialect Mandarin type, and stores it.
It should be noted that, after the voice interaction device collects a piece of voice data, it may automatically return to execute steps 102 to 104: obtaining another user operation instruction, outputting it, starting the voice acquisition interface, collecting another piece of voice data, and sending it to the server. In this way, steps 102 to 104 are executed repeatedly, as sketched below, until the voice interaction device has collected corresponding voice data for all the content of the voice to be collected in the voice collection activity, i.e. for the plurality of user operation instructions, and sent it to the server.
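As a rough sketch, the repeat of steps 102 to 104 can be pictured as the loop below. `device` and `server` are hypothetical interfaces assumed for illustration rather than APIs defined by the patent.

```python
def run_collection_session(device, server):
    # Assumed interfaces: server.next_instruction/upload and
    # device.output/capture_voice are illustrative names only.
    while True:
        instruction = server.next_instruction(device.user_id)   # step 102
        if instruction is None:        # every prompt in the activity is done
            break
        device.output(instruction.text)        # step 103: show/play/print it
        voice_data = device.capture_voice()    # step 103: open interface, record
        server.upload(device.user_id, instruction.id, voice_data)  # step 104
```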
It should also be noted that, after the voice interaction device has collected and sent voice data for all the content to be collected, i.e. for the plurality of user operation instructions, and the server has received the voice data corresponding to each of them, the server sends reward information, such as a coupon, to the terminal device corresponding to the user of the voice interaction device.
Therefore, in the method of this embodiment, when the voice recognition database in the server is optimized, voice data can be collected through the voice interaction device, and the server automatically classifies and stores it, so that voice collection can be accomplished through the voice interaction device alone, without going to a specific site, which facilitates voice data collection. Moreover, when collecting multiple pieces of voice data, the voice interaction device can directly start the voice acquisition interface to begin collecting the next piece, without being woken up again, so the voice data collection efficiency is higher.
In a specific embodiment, since the voice interaction device starts the voice acquisition interface immediately after collecting one piece of voice data for one user operation instruction, and then collects another piece for another user operation instruction, the time interval between the collection of two pieces of voice data is relatively short. In some cases, therefore, the voice interaction device has already collected the next piece of voice data before step 104 has been executed for the previous piece.
Thus, a send cache is provided in the voice interaction device for buffering the voice data to be sent to the server. After the voice interaction device collects a piece of voice data, it stores the data in the send cache, and the voice interaction device sends the voice data stored in the send cache to the server in order. In this way, when the voice interaction device executes step 104, the piece of voice data corresponding to the user operation instruction is stored in the send cache, and when the send cycle of that piece arrives, it is sent from the send cache to the server.
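One plausible way to realize such a send cache is a small queue drained by a background thread, as in the sketch below; the class and its method names are assumptions for illustration, not the patent's implementation.

```python
import queue
import threading

class SendCache:
    """Illustrative send cache: collection enqueues each piece of voice data
    immediately so the next prompt can start, while a background worker
    uploads the cached pieces to the server in order."""

    def __init__(self, upload_fn):
        self._queue = queue.Queue()
        self._upload = upload_fn  # e.g. a function that POSTs to the server
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, voice_data):
        self._queue.put(voice_data)  # returns at once; capture can continue

    def _drain(self):
        while True:
            voice_data = self._queue.get()  # FIFO keeps the send order
            self._upload(voice_data)        # one piece per send cycle
```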
Referring to fig. 3, in another specific embodiment, for the server in the scenario shown in fig. 1, the voice collecting method may be implemented by the following embodiments, specifically including:
It can be understood that the server may store the voice collection enrollment information of multiple users, and when a user collects voice data through the voice interaction device, the device may first check with the server whether it has the collection qualification.
Specifically, the voice interaction device sends a query request to the server, where the query request includes the user identifier of the voice interaction device; the server queries whether voice collection enrollment information corresponding to the user identifier is stored, and returns the query result to the voice interaction device. If the query result is that the server stores the voice collection enrollment information corresponding to the user identifier, the voice interaction device is determined to have the collection qualification.
Steps 201 and 202: if the server determines that the voice interaction device has the collection qualification, it sends a user operation instruction to the voice interaction device (step 201); the voice interaction device collects voice data according to the method of steps 102 to 104 above and sends the voice data to the server, which receives it (step 202).
Further, in practical applications, there is a large amount of content to be collected in one voice collection activity, which needs to be sent to the voice interaction device through a plurality of user operation instructions. When the server has sent the plurality of user operation instructions to the voice interaction device and received the voice data corresponding to each of them, the server may also send reward information to the terminal device corresponding to the user of the voice interaction device. The terminal device may be the same as or different from the voice interaction device described above.
Step 203, the server determines the initial acoustic models respectively corresponding to a plurality of initial primitives included in the collected voice data, obtaining a plurality of initial acoustic models. It can be understood that the voice data is composed of a plurality of primitives (i.e. initial primitives); for example, an initial primitive may be an initial such as b, or a final such as an. An initial acoustic model describes the probability of recognizing a syllable in any hidden state from the observed voice information, that is, from each initial primitive in the voice data received by the server, so each initial primitive corresponds to one initial acoustic model. Specifically:
The initial acoustic model of an initial primitive may be a Hidden Markov Model (HMM), which is mainly represented by mixtures of multiple Gaussian distributions.
In step 204, the server calculates the acoustic distance between each initial acoustic model and a preset acoustic model of standard mandarin chinese.
The preset acoustic model of standard Mandarin is stored in advance by the server and is obtained by training on a large amount of voice data; it may specifically be a context-dependent HMM model. The acoustic distance is used to measure the similarity between a primitive of standard Mandarin and an initial primitive of the voice data, and may specifically be an asymmetric Mahalanobis distance or the like.
In this step, the server needs to calculate the acoustic distance between the initial acoustic model corresponding to each initial primitive and the acoustic model of each primitive of standard Mandarin, so a plurality of acoustic distances are obtained for each initial acoustic model; the server may take the minimum of these acoustic distances as the acoustic distance between that initial acoustic model and the acoustic model of standard Mandarin.
For example, the acoustic distance between an initial acoustic model and the acoustic model of standard Mandarin can be expressed by the following formula 1:

$$D(\lambda_i, \lambda_j) = \frac{1}{K}\sum_{k=1}^{K}\sum_{m=1}^{M}\sum_{n=1}^{N} w_{m_{i,k}}\, w_{n_{j,k}}\, d\!\left(m_{i,k},\, n_{j,k}\right) \qquad \text{(formula 1)}$$

In formula 1, $\lambda_i$ and $\lambda_j$ are the acoustic model of standard Mandarin and the initial acoustic model, respectively; M and N are the numbers of Gaussian mixture components in the acoustic model of standard Mandarin and in the initial acoustic model, respectively, and K is the number of states in the initial acoustic model; $w_{m_{i,k}}$ and $w_{n_{j,k}}$ are the weights of the Gaussian mixture components in the acoustic model of standard Mandarin and in the initial acoustic model, respectively; and $d(m_{i,k}, n_{j,k})$ is the distance between Gaussian mixture component $m_{i,k}$ and Gaussian mixture component $n_{j,k}$.
The distance between any two Gaussian mixture components i and j can be expressed by the following formula 2, an asymmetric Mahalanobis distance:

$$d(i, j) = \left(\mu_i - \mu_j\right)^{T} \Sigma_j^{-1} \left(\mu_i - \mu_j\right) \qquad \text{(formula 2)}$$

In formula 2, $\mu$ and $\Sigma$ denote the mean and variance of a Gaussian mixture component, respectively.
If the acoustic distance corresponding to each initial acoustic model is smaller than a first threshold, the server determines that the collected voice data belongs to the standard Mandarin type. Step 207, if the acoustic distance corresponding to at least one initial acoustic model is greater than the first threshold and smaller than a second threshold, the server merges, based on the relevant primitives, the preset acoustic model of standard Mandarin with the plurality of initial acoustic models obtained in step 203, and takes the merged acoustic model as the final acoustic model of the collected voice data; it then determines the dialect Mandarin type to which the collected voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins.
Here, one context-dependent HMM model corresponds to one context-dependent primitive. For example, the context-dependent primitive x-an+ represents that the center primitive is the final an and that there are related primitives before and after it, such as b-an+d or l-an+d. The initial acoustic model, by contrast, is a context-independent HMM model, and one context-independent HMM model corresponds to one initial primitive included in the voice data.
Thus, when the server performs the merging in this step, if an initial primitive included in the voice data is related to a context-dependent primitive of standard Mandarin, that is, it is the same as the center primitive of the context-dependent primitive, the context-independent HMM model of the initial primitive is merged with the context-dependent HMM model of standard Mandarin on a same-state basis. In the merged HMM model, the multiple Gaussian distributions included in the same state are derived from the voice data and from standard Mandarin, respectively. Here, the same state means that the syllables share the same hidden state.
For example: the speech data includes an initial primitive an, and a context-dependent primitive of mandarin chinese standard is an x-an +, then the HMM models corresponding to the two primitives are merged under the same state.
For example, the merged HMM model may be represented by the output probability density function of each state, where the output probability density function of one state can be expressed by the following formula 3:

$$b_i(x) = \lambda \sum_{k=1}^{K} w_{ik}\, N\!\left(x;\, \mu_{ik},\, \Sigma_{ik}\right) + \frac{1-\lambda}{M} \sum_{m=1}^{M} \sum_{n=1}^{N} w_{mn}\, N\!\left(x;\, \mu_{mn},\, \Sigma_{mn}\right) \qquad \text{(formula 3)}$$

In formula 3, x is the input feature vector; the first and second sums correspond to the i-th state of the acoustic model of standard Mandarin and to the states of the initial acoustic model, respectively; K and N are the numbers of Gaussian mixture components in the i-th state of the acoustic model of standard Mandarin and in a state of the initial acoustic model, respectively, and M is the number of states of the initial acoustic model participating in the merging; $\lambda$ is the interpolation factor; $w_{ik}$ is the weight of the k-th Gaussian mixture component of the i-th state; and $N(\cdot)$ denotes a Gaussian distribution.
When determining the dialect Mandarin type to which the voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins, the server may calculate the acoustic distance between the final acoustic model and the acoustic model of each dialect Mandarin; if the acoustic distance to the acoustic model of a certain dialect Mandarin is the minimum, the voice data is determined to be of that dialect Mandarin type and may be stored in the database of that type.
Step 208, if the acoustic distance corresponding to at least one initial acoustic model is greater than the second threshold, the server re-determines a plurality of final primitives included in the collected voice data, where the final primitives include the initial primitives and newly added primitives, and determines the dialect Mandarin type or standard Mandarin type to which the collected voice data belongs according to the final acoustic models corresponding to the final primitives and the preset acoustic models of standard Mandarin and of all dialect Mandarins.
Specifically, the server adds at least one new primitive and re-determines the final primitives included in the collected voice data. For example, if the initial primitives obtained in step 203 are A, B, C and D, the final primitives obtained in step 208 may be A, B, C, D, E and F, where E and F are the newly added primitives.
The server then determines the final acoustic models corresponding to the final primitives, calculates the acoustic distances between the final acoustic models and the acoustic models of each dialect Mandarin and of standard Mandarin, and determines the collected voice data to be of the dialect Mandarin type or the standard Mandarin type whose acoustic model is at the minimum acoustic distance.
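The final classification can then be sketched as an argmin over the stored types, reusing `model_distance` from the sketch above; the dictionary layout is an assumed convenience, not the server's actual storage.

```python
def classify_voice_data(final_models, type_model_sets):
    """Assign the collected voice data to the type (standard Mandarin or a
    dialect Mandarin) whose preset models lie at the minimum total acoustic
    distance from the final acoustic models."""
    def total_distance(type_models):
        return sum(min(model_distance(ref, final) for ref in type_models)
                   for final in final_models)
    # type_model_sets: {"standard": [...], "Sichuan": [...], ...} (assumed)
    return min(type_model_sets, key=lambda name: total_distance(type_model_sets[name]))
```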
A specific embodiment is described below to illustrate the voice collection method of the present invention. The method of this embodiment may be applied to the scene shown in fig. 1, with the voice interaction device specifically being a robot that has a display screen. The method of this embodiment includes the following parts, with a schematic diagram shown in fig. 4:
(1) server pushing information of voice collection activity
11. The server pushes information of the voice collection activity to a client (such as a QQ client) through an activity webpage, which includes a start interface for inputting user information.
12. If the user wants to respond to the voice collection activity, the user can click the start interface, so that the client opens a webpage client and displays an input interface for user information through it; the user inputs voice collection enrollment information into the input interface, and it is uploaded through the input interface to the server for storage; the server returns confirmation information to the webpage client.
The voice collection enrollment information may include the user information (such as a user identifier) bound to the user's robot, and information such as the user's dialect, region and age.
(2) User collecting voice through robot
21. The user wakes the robot with a wake-up instruction and, with a voice collection instruction, makes the robot send a query request to the server, where the query request includes the robot's user identifier; the server queries, according to the query request, whether voice collection enrollment information corresponding to the user identifier is stored, and returns the query result to the robot; the robot determines from the query result whether it has the collection qualification.
22. If the robot determines that its corresponding user has the collection qualification, the server sends the content of the voice to be collected in the voice collection activity to the robot, piece by piece, through a plurality of user operation instructions.
23. The robot displays the user operation instructions on its display screen and collects the corresponding voice data for each of them. Specifically, each time the robot displays a user operation instruction, it starts the voice acquisition interface, collects the voice data corresponding to the instruction through the interface, and sends the collected voice data to the server.
After the robot has collected the voice data corresponding to one user operation instruction, it automatically obtains the next user operation instruction, starts the voice acquisition interface, and collects the voice data for that instruction.
(3) The server classifies the collected voice data, and specifically, the server may determine a type (mandarin in a certain dialect or standard mandarin) to which each piece of collected voice data belongs according to the above-described flow illustrated in fig. 3, and store the piece of voice data in a database of the corresponding type.
(4) After the user collects the voice data corresponding to the user operation instructions through the robot, the server can send reward information to the terminal device corresponding to the user of the robot.
An embodiment of the present invention further provides a voice interaction device, such as a robot, whose schematic structural diagram is shown in fig. 5, which may specifically include:
the authority determining unit 10 is configured to determine whether the voice interaction device has an acquisition authority or not when receiving a voice acquisition instruction after the voice interaction device is awakened, and notify the instruction acquiring unit 11 to acquire a user operation instruction if the voice interaction device has the acquisition authority.
The permission determining unit 10 sends a query request to a server by a specific user, where the query request includes a user identifier of the voice interaction device, so that the server queries whether to store voice acquisition entry information corresponding to the user identifier according to the query request, and returns a query result to the voice interaction device; and if the query result received by the voice interaction equipment is that the server stores the voice acquisition registration information corresponding to the user identification, determining that the voice interaction equipment has acquisition permission.
The instruction obtaining unit 11 is configured to obtain a user operation instruction from a server and output the user operation instruction if the voice interaction device has a collection authority, where the user operation instruction includes content of a voice to be collected;
the acquisition unit 12 is configured to start a voice acquisition interface, and acquire, through the voice acquisition interface, a piece of voice data corresponding to one user operation instruction output by the instruction acquisition unit 11;
and the acquisition and sending unit 13 is configured to send a piece of voice data acquired by the acquisition unit 12 to the server, so that the server classifies the piece of voice data according to preset acoustic models of various types.
When the acquisition unit 12 has collected the piece of voice data, it triggers the instruction acquisition unit 11 to obtain another user operation instruction; for that instruction, the acquisition unit 12 starts the voice acquisition interface and collects the corresponding piece of voice data, and the acquisition and sending unit 13 sends it to the server.
In a specific embodiment, the voice interaction device may further include a setting unit 14 configured to set a sending buffer, where the sending buffer is used to buffer the voice data sent to the server. When the acquisition unit 12 acquires a piece of voice data, the acquired piece of voice data is stored in the sending buffer of the voice interaction device set by the setting unit 14, and when the sending cycle of the piece of voice data arrives, the acquisition and sending unit 13 sends the piece of voice data in the sending buffer to the server.
It can be seen that, in the device of this embodiment, when the voice recognition database in the server is optimized, the acquisition unit 12 in the voice interaction device can collect voice data, the acquisition and sending unit 13 sends the collected voice data to the server, and the server automatically classifies it, so that voice collection can be accomplished through the voice interaction device alone, without going to a specific site, which facilitates voice data collection. Moreover, when multiple pieces of voice data are collected, as soon as the acquisition unit 12 finishes collecting one piece, the voice interaction device can directly start the voice acquisition interface to collect the next piece, without being woken up again, so the voice data collection efficiency is higher.
An embodiment of the present invention further provides a server, a schematic structural diagram of which is shown in fig. 6, and the server specifically includes:
the instruction sending unit 21 is configured to send a user operation instruction to the voice interaction device if the voice interaction device has a collection authority, where the user operation instruction includes content of a voice to be collected.
Further, the instruction sending unit 21 may be further configured to receive a query request sent by the voice interaction device, where the query request includes the user identifier of the voice interaction device; to query whether voice collection enrollment information corresponding to the user identifier is stored and return the query result to the voice interaction device; and to determine, if the query result is that the server stores the voice collection enrollment information corresponding to the user identifier, that the voice interaction device has the collection qualification.
The collecting and receiving unit 22 is configured to receive voice data collected by a voice interaction device with a collection authority, and specifically receive voice data corresponding to a user operation instruction sent by the instruction sending unit 21 and collected by the voice interaction device.
The model determining unit 23 is configured to determine initial acoustic models corresponding to a plurality of initial primitives included in the voice data received by the collecting and receiving unit 22, so as to obtain a plurality of initial acoustic models.
A distance calculating unit 24, configured to calculate acoustic distances between the initial acoustic models determined by the model determining unit 23 and preset acoustic models of mandarin standard.
The distance calculating unit 24 is specifically configured to calculate the acoustic distances between the initial acoustic model corresponding to each initial primitive and the acoustic models of the primitives of standard Mandarin, and to take the smallest of the plurality of acoustic distances obtained for an initial acoustic model as the acoustic distance between that initial acoustic model and the acoustic model of standard Mandarin.
The distance processing unit 25 is configured to, if the acoustic distance corresponding to at least one initial acoustic model calculated by the distance calculating unit 24 is greater than a first threshold and smaller than a second threshold, merge the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on the relevant primitives, take the merged acoustic model as the final acoustic model of the voice data, and determine the dialect Mandarin type to which the collected voice data belongs according to the final acoustic model and the preset acoustic models of the dialect Mandarins.
Further, the distance processing unit 25 is further configured to determine that the collected voice data belongs to the standard Mandarin type if the acoustic distance corresponding to each initial acoustic model is smaller than the first threshold; and, if the acoustic distance corresponding to at least one initial acoustic model is greater than the second threshold, to re-determine a plurality of final primitives included in the voice data, where the final primitives include the initial primitives and newly added primitives, and determine the dialect Mandarin type or standard Mandarin type to which the collected voice data belongs according to the final acoustic models corresponding to the final primitives and the preset acoustic models of standard Mandarin and of all dialect Mandarins.
After determining the type of a piece of voice data, the distance processing unit 25 stores the voice data in the voice database of that type.
In the server of this embodiment, the instruction sending unit 21 and the collecting and receiving unit 22 may obtain the collected voice data through interaction with the voice interaction device, and the model determining unit 23, the distance calculating unit 24 and the distance processing unit 25 automatically classify the collected voice data, so that voice collection need not be performed at a specific site; only the voice interaction device is needed, which facilitates voice data collection.
An embodiment of the present invention further provides a terminal device, a schematic structural diagram of which is shown in fig. 7. The terminal device may vary considerably with configuration or performance, and may include one or more central processing units (CPUs) 30 (e.g., one or more processors), a memory 31, and one or more storage media 32 (e.g., one or more mass storage devices) storing applications 321 or data 322. The memory 31 and the storage medium 32 may be transient or persistent storage. The program stored in the storage medium 32 may include one or more modules (not shown), and each module may include a series of instruction operations for the terminal device. Further, the central processing unit 30 may be configured to communicate with the storage medium 32 and execute, on the terminal device, the series of instruction operations in the storage medium 32.
Specifically, the application 321 stored in the storage medium 32 includes an application for voice acquisition, and the application may include the authority determining unit 10, the instruction acquiring unit 11, the acquiring unit 12, the acquisition and sending unit 13, and the setting unit 14 in the voice interaction device, which is not described herein again. Further, the central processor 30 may be configured to communicate with the storage medium 32, and execute a series of operations corresponding to the application program for voice capturing stored in the storage medium 32 on the terminal device.
The terminal device may also include one or more power supplies 33, one or more wired or wireless network interfaces 34, one or more input/output interfaces 35, and/or one or more operating systems 323, such as Windows Server™, Mac OS X™, Unix™, Linux™, or FreeBSD™.
The steps executed by the voice interaction device in the method embodiments above may be performed based on the terminal device structure shown in fig. 7.
The embodiment of the present invention further provides a server whose structure is similar to that of the terminal device shown in fig. 7, except that in the server of this embodiment the application stored in the storage medium is an application for voice acquisition, which may include the instruction sending unit 21, the acquisition receiving unit 22, the model determining unit 23, the distance calculating unit 24, and the distance processing unit 25 of the server described above; these are not repeated here. Further, the central processing unit may be configured to communicate with the storage medium and to execute, on the server, the series of operations corresponding to the voice acquisition application stored in the storage medium.
The embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the voice acquisition method of the voice interaction device as executed by the voice interaction device or by the server.
The embodiment of the invention also provides a terminal device, which includes a processor and a storage medium, wherein the processor is configured to execute instructions;
the storage medium is configured to store a plurality of instructions, and the instructions are to be loaded by the processor to perform the voice acquisition method of the voice interaction device as executed by the voice interaction device.
The embodiment of the invention also provides a voice acquisition system, which includes a voice interaction device and a server. The structure of the voice interaction device may be as shown in fig. 5, and the structure of the server may be as shown in fig. 6; neither is repeated here.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware instructed by a program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The voice acquisition method, system, and storage medium of a voice interaction device provided by the embodiments of the invention have been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the embodiments is intended only to help readers understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the invention.
Claims (16)
1. A voice acquisition method of a voice interaction device, characterized by comprising the following steps:
after the voice interaction device is woken up, when a voice acquisition instruction is received, determining whether the voice interaction device has acquisition permission;
if the voice interaction device has acquisition permission, the voice interaction device acquires a user operation instruction from a server, wherein the user operation instruction comprises the content of the voice to be collected;
the voice interaction device outputs the user operation instruction and starts a voice acquisition interface of the voice interaction device;
the voice interaction device collects, through the voice acquisition interface, a piece of voice data corresponding to the user operation instruction;
the voice interaction device sends the collected piece of voice data to the server, so that the server classifies the voice data according to preset acoustic models of various types;
after the voice interaction device has collected the piece of voice data, for another user operation instruction, the voice interaction device performs the steps of acquiring the another user operation instruction from the server, outputting the another user operation instruction, starting the voice acquisition interface, collecting another piece of voice data, and sending the another piece of voice data to the server.
2. The method of claim 1, wherein the determining whether the voice interaction device has acquisition permission comprises:
the voice interaction device sends a query request to the server, wherein the query request comprises a user identifier of the voice interaction device, so that the server queries, according to the query request, whether voice acquisition registration information corresponding to the user identifier is stored, and returns a query result to the voice interaction device;
if the query result received by the voice interaction device indicates that the server stores the voice acquisition registration information corresponding to the user identifier, determining that the voice interaction device has acquisition permission.
3. The method of claim 1, wherein before the voice interaction device sends the collected piece of voice data to the server, the method further comprises:
the voice interaction device sets a sending cache, and the sending cache is used for caching voice data to be sent to the server.
4. The method of claim 3, wherein the voice interaction device sending the collected piece of voice data to the server specifically comprises:
the voice interaction device stores the collected piece of voice data in the sending cache, and sends the piece of voice data in the sending cache to the server when the sending cycle of the piece of voice data is reached.
5. A voice acquisition method of a voice interaction device, the method comprising:
if the voice interaction device has acquisition permission, a server sends a user operation instruction to the voice interaction device, wherein the user operation instruction comprises the content of the voice to be collected;
the server receives voice data collected by the voice interaction device according to the user operation instruction;
the server determines the initial acoustic models respectively corresponding to a plurality of initial primitives included in the collected voice data, obtaining a plurality of initial acoustic models;
the server calculates the acoustic distance between each initial acoustic model and a preset acoustic model of standard Mandarin;
if the acoustic distance corresponding to at least one initial acoustic model is greater than a first threshold and smaller than a second threshold, the server merges the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on the relevant primitives, takes the merged acoustic model as the final acoustic model of the collected voice data, and determines, according to the final acoustic model and preset acoustic models of dialect-accented Mandarin varieties, the dialect-accented Mandarin type to which the collected voice data belongs.
6. The method of claim 5, wherein the server calculating the acoustic distance between each initial acoustic model and the preset acoustic model of standard Mandarin specifically comprises:
calculating the acoustic distance between the initial acoustic model corresponding to each initial primitive and the acoustic model of each primitive of standard Mandarin;
taking the smallest of the acoustic distances obtained for a given initial acoustic model as the acoustic distance between that initial acoustic model and the acoustic model of standard Mandarin.
7. The method of claim 5, wherein the method further comprises:
if the acoustic distance corresponding to each initial acoustic model is smaller than the first threshold, determining that the collected voice data belongs to the standard Mandarin type.
8. The method of claim 5, wherein the method further comprises:
if the acoustic distance corresponding to at least one initial acoustic model is greater than the second threshold, re-determining a plurality of final primitives included in the voice data, wherein the final primitives comprise the initial primitives and newly added primitives, and determining the dialect-accented Mandarin type or standard Mandarin type to which the collected voice data belongs according to the final acoustic models respectively corresponding to the final primitives and the preset acoustic models of standard Mandarin and of each dialect-accented Mandarin.
9. The method of any one of claims 5 to 8, wherein
the initial acoustic models are context-independent hidden Markov models, the preset acoustic model of standard Mandarin is a context-independent hidden Markov model, and the acoustic distance is an asymmetric Mahalanobis distance.
10. The method of any one of claims 5 to 8, wherein before the server sends the user operation instruction to the voice interaction device, the method further comprises:
the server receives a query request sent by the voice interaction device, wherein the query request comprises a user identifier of the voice interaction device;
the server queries whether voice acquisition registration information corresponding to the user identifier is stored, and returns a query result to the voice interaction device;
if the query result indicates that the server stores the voice acquisition registration information corresponding to the user identifier, the server determines that the voice interaction device has acquisition permission.
11. The method of any one of claims 5 to 8, wherein there are a plurality of user operation instructions, and the method further comprises:
the server receives the voice data collected by the voice interaction device for the plurality of user operation instructions, and sends reward information to a terminal device corresponding to the user of the voice interaction device.
12. A voice interaction device, comprising:
a permission determining unit, configured to determine, after the voice interaction device is woken up and a voice acquisition instruction is received, whether the voice interaction device has acquisition permission, and to notify an instruction acquisition unit to acquire a user operation instruction if it does;
the instruction acquisition unit, configured to acquire the user operation instruction from a server and output it if the voice interaction device has acquisition permission, wherein the user operation instruction comprises the content of the voice to be collected;
an acquisition unit, configured to start a voice acquisition interface and collect, through the voice acquisition interface, a piece of voice data corresponding to the user operation instruction;
an acquisition sending unit, configured to send the collected voice data to the server, so that the server classifies the collected voice data according to preset acoustic models of various types;
wherein the acquisition unit is further configured to trigger the instruction acquisition unit to acquire and output another user operation instruction after the piece of voice data has been collected.
13. A server, comprising:
an instruction sending unit, configured to send a user operation instruction to a voice interaction device if the voice interaction device has acquisition permission, wherein the user operation instruction comprises the content of the voice to be collected;
an acquisition receiving unit, configured to receive voice data collected by the voice interaction device according to the user operation instruction;
a model determining unit, configured to determine the initial acoustic models respectively corresponding to a plurality of initial primitives included in the collected voice data, obtaining a plurality of initial acoustic models;
a distance calculating unit, configured to calculate the acoustic distance between each initial acoustic model and a preset acoustic model of standard Mandarin;
a distance processing unit, configured to merge the preset acoustic model of standard Mandarin with the plurality of initial acoustic models based on the relevant primitives if the acoustic distance corresponding to at least one initial acoustic model is greater than a first threshold and smaller than a second threshold, to take the merged acoustic model as the final acoustic model of the collected voice data, and to determine, according to the final acoustic model and preset acoustic models of dialect-accented Mandarin varieties, the dialect-accented Mandarin type to which the collected voice data belongs.
14. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the voice acquisition method of a voice interaction device according to any one of claims 1 to 11.
15. A terminal device comprising a processor and a storage medium, the processor being configured to execute instructions;
the storage medium being configured to store a plurality of instructions to be loaded by the processor to perform the voice acquisition method of a voice interaction device according to any one of claims 1 to 4.
16. A voice acquisition system comprising a voice interaction device and a server, the voice interaction device being the voice interaction device of claim 12;
the server is a server according to claim 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711324790.9A CN108010518B (en) | 2017-12-13 | 2017-12-13 | Voice acquisition method, system and storage medium of voice interaction equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108010518A CN108010518A (en) | 2018-05-08 |
CN108010518B true CN108010518B (en) | 2022-08-23 |
Family ID: 62058447
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111210826B (en) * | 2019-12-26 | 2022-08-05 | 深圳市优必选科技股份有限公司 | Voice information processing method and device, storage medium and intelligent terminal |
CN111161732B (en) * | 2019-12-30 | 2022-12-13 | 秒针信息技术有限公司 | Voice acquisition method and device, electronic equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7392185B2 (en) * | 1999-11-12 | 2008-06-24 | Phoenix Solutions, Inc. | Speech based learning/training system using semantic decoding |
US9564125B2 (en) * | 2012-11-13 | 2017-02-07 | GM Global Technology Operations LLC | Methods and systems for adapting a speech system based on user characteristics |
CN107170454B (en) * | 2017-05-31 | 2022-04-05 | Oppo广东移动通信有限公司 | Speech recognition method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |