CN111428860B - Method and device for reducing power consumption of time delay neural network model - Google Patents
Method and device for reducing power consumption of time delay neural network model
- Publication number
- CN111428860B (application CN202010163869.3A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- network model
- time delay
- delay neural
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a method and a device for reducing the power consumption of a time delay neural network model. The method comprises the following steps: acquiring a first time delay neural network model; decomposing an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model; performing parameter adjustment on the third time delay neural network model to obtain a fourth time delay neural network model; and importing the fourth time delay neural network model into a preset embedded device. With this technical scheme, the small parameter count of the fourth time delay neural network model reduces the amount of calculation, so the power consumption of the preset device is greatly reduced and the user experience is improved.
Description
Technical Field
The invention relates to the technical field of speech recognition, and in particular to a method and a device for reducing the power consumption of a time delay neural network model.
Background
A time delay neural network (TDNN) is an artificial neural network architecture. The TDNN was proposed to classify phonemes in a speech signal for automatic speech recognition, because automatically determining accurate segment or feature boundaries in speech is difficult or impossible; the TDNN recognizes phonemes and their underlying acoustic/phonetic features regardless of their position in time, and is therefore unaffected by time shifts.
Although the recognition performance of the TDNN is good, its network parameter count is large, and the large parameter count in turn makes the amount of computation large, so a device running the TDNN consumes considerable power, which degrades the user experience.
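As a back-of-the-envelope illustration (the layer sizes below are assumed for illustration and do not come from the patent), the per-frame compute of a dense layer grows with its parameter count, which is why a large-parameter TDNN is power-hungry on a device:

```python
# Rough cost model: a dense layer with a weight matrix of shape
# (out_dim, in_dim) needs out_dim * in_dim multiply-accumulates (MACs)
# per input frame -- the same number as its parameters.
def macs_per_frame(out_dim: int, in_dim: int) -> int:
    return out_dim * in_dim

full = macs_per_frame(512, 512)                               # one 512x512 layer
low_rank = macs_per_frame(64, 512) + macs_per_frame(512, 64)  # same layer, rank-64 factorization
print(full, low_rank)  # 262144 65536
```

Under these assumed sizes, cutting one 512-dimensional layer to rank 64 shrinks both parameters and MACs by a factor of four, which is the kind of reduction the method targets.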
Disclosure of Invention
The invention provides a method and a device for reducing the power consumption of a time delay neural network model. The technical scheme is as follows:
according to a first aspect of an embodiment of the present invention, there is provided a method for reducing power consumption of a time delay neural network model, including:
acquiring a first time delay neural network model;
decomposing an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model;
performing parameter adjustment on the third time delay neural network model to obtain a fourth time delay neural network model;
and importing the fourth time delay neural network model into a preset embedded device.
In one embodiment, the decomposing the original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model includes:
clipping the original matrix in the first time delay neural network model into the two target matrices by using singular value decomposition (SVD);
and constructing a model structure through the two target matrixes based on the first time delay neural network model so as to obtain the second time delay neural network model.
In one embodiment, the training the second time delay neural network model to obtain a third time delay neural network model includes:
acquiring a plurality of pieces of voice data;
determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set;
and training the second time delay neural network model through the training set to obtain the third time delay neural network model.
In one embodiment, the performing parameter adjustment on the third delay neural network model to obtain a fourth delay neural network model includes:
determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and carrying out parameter adjustment on the third time delay neural network model through the verification set to obtain the fourth time delay neural network model.
In one embodiment, the importing the fourth time delay neural network model into an embedded device includes:
acquiring an import instruction of the embedded equipment;
judging whether the fourth time delay neural network model meets a preset standard;
and when the fourth time delay neural network model meets the preset standard, importing the fourth time delay neural network model into the embedded device according to the import instruction.
According to a second aspect of an embodiment of the present invention, there is provided an apparatus for reducing power consumption of a time delay neural network model, including:
the acquisition module is used for acquiring a first time delay neural network model;
the training module is used for decomposing an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model;
the adjusting module is used for carrying out parameter adjustment on the third time delay neural network model so as to obtain a fourth time delay neural network model;
and the importing module is used for importing the fourth time delay neural network model into preset embedded equipment.
In one embodiment, the training module comprises:
a clipping sub-module, configured to clip an original matrix in the first time delay neural network model into the two target matrices by using an SVD technique;
and the construction submodule is used for constructing a model structure through the two target matrixes based on the first time delay neural network model so as to obtain the second time delay neural network model.
In one embodiment, the training module comprises:
the first acquisition sub-module is used for acquiring a plurality of pieces of voice data;
the first determining submodule is used for determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set;
and the first training sub-module is used for training the second time delay neural network model through the training set so as to obtain the third time delay neural network model.
In one embodiment, the adjustment module includes:
the second determining submodule is used for determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and the adjustment sub-module is used for carrying out parameter adjustment on the third time delay neural network model through the verification set so as to obtain the fourth time delay neural network model.
In one embodiment, the import module includes:
the second acquisition sub-module is used for acquiring the import instruction of the embedded equipment;
the judging submodule is used for judging whether the fourth time delay neural network model meets a preset standard;
and the importing sub-module is used for importing the fourth time delay neural network model into the embedded equipment according to the importing instruction when the fourth time delay neural network model meets a preset standard.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
A first time delay neural network model is acquired, and the original matrix in it is decomposed into two target matrices to obtain a second time delay neural network model; the second model is trained to obtain a third time delay neural network model; parameter adjustment is performed on the third model to obtain a fourth time delay neural network model; and the fourth model is finally imported into a preset embedded device. Decomposing the original matrix into two target matrices greatly reduces the parameter count of the second time delay neural network model. Training the second model yields the third model, and parameter adjustment then yields a fourth time delay neural network model with a small parameter count and good recognition performance. Because the fourth model imported into the preset device has few parameters, the amount of calculation is greatly reduced, which greatly reduces power consumption and improves the user experience when the preset device is used.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flowchart of a method for reducing power consumption of a time delay neural network model in an embodiment of the invention;
FIG. 2 is a flowchart of another method for reducing power consumption of a time delay neural network model according to an embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus for reducing power consumption of a time delay neural network model in an embodiment of the invention;
FIG. 4 is a block diagram of another apparatus for reducing power consumption of a time delay neural network model according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Fig. 1 is a flowchart of a method for reducing power consumption of a time delay neural network model according to an embodiment of the present invention. As shown in fig. 1, the method may be implemented as the following steps S11-S14:
in step S11, a first time delay neural network model is acquired;
in step S12, the original matrix in the first time delay neural network model is decomposed into two target matrices to obtain a second time delay neural network model, and the second time delay neural network model is trained to obtain a third time delay neural network model. The original matrix is a large weight matrix (a weight matrix with a large parameter count), and the target matrices are the small weight matrices obtained after clipping.
In step S13, parameter adjustment is performed on the third time delay neural network model to obtain a fourth time delay neural network model;
in step S14, the fourth time delay neural network model is imported into the preset embedded device.
A first time delay neural network model is acquired, and its original matrix is decomposed into two target matrices to obtain a second time delay neural network model; the second model is trained to obtain a third time delay neural network model; parameters of the third model are adjusted to obtain a fourth time delay neural network model; and the fourth model is finally imported into a preset embedded device. Decomposing the original matrix into two target matrices greatly reduces the parameter count of the second time delay neural network model; training it yields the third model, and parameter adjustment yields a fourth time delay neural network model with a small parameter count and good recognition performance. Because the fourth time delay neural network model has few parameters, the power consumption of the preset device is greatly reduced, which greatly improves the user experience when the device is used.
As shown in fig. 2, in one embodiment, the above step S12 may be implemented as the following steps S121-S122:
in step S121, the original matrix in the first time delay neural network model is clipped into two target matrices by singular value decomposition (SVD). The large weight matrix in the first time delay neural network model is cut into two small weight matrices by SVD, and the parameter count of the two resulting small weight matrices is greatly reduced.
In step S122, based on the first time delay neural network model, a model structure is constructed from the two target matrices to obtain the second time delay neural network model; that is, the network structure of the first time delay neural network model is reused, with the two small weight matrices in place of the original matrix, to build the second time delay neural network model.
The original matrix in the first time delay neural network model is clipped into two small target matrices by SVD, and a model structure is then built from the two target matrices to obtain the second time delay neural network model. Because of the clipping operation, the parameter count of the second time delay neural network model is reduced, so the amount of computation is also greatly reduced.
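The clipping step described above can be sketched in NumPy (the function name, rank, and matrix sizes are illustrative assumptions, not values from the patent): the original weight matrix W is factored by truncated SVD into two small target matrices A and B such that A @ B approximates W.

```python
import numpy as np

def svd_clip(W: np.ndarray, rank: int):
    """Clip a large weight matrix W (m x n) into two small target
    matrices A (m x rank) and B (rank x n) via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb the singular values into A
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(512, 512)   # original matrix: 262,144 parameters
A, B = svd_clip(W, rank=64)     # target matrices: 512*64 + 64*512 = 65,536 parameters
assert A.shape == (512, 64) and B.shape == (64, 512)
```

In the network, the single layer y = W x would then be replaced by the pair y = A (B x), and the factored model is retrained to recover accuracy, as in the training step of the method.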
In one embodiment, the training the second delayed neural network model to obtain a third delayed neural network model includes:
acquiring a plurality of pieces of voice data;
determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set; the training set may comprise, but is not limited to, four fifths of the plurality of pieces of voice data.
And training the second time delay neural network model through the training set to obtain the third time delay neural network model.
The second time delay neural network model is trained on the training set to obtain the third time delay neural network model, which has high recognition accuracy.
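The data split in this embodiment can be sketched as follows (the four-fifths ratio comes from the embodiment above; the function name and fixed seed are illustrative assumptions):

```python
import random

def split_voice_data(pieces, train_fraction=0.8, seed=0):
    """Partition the pieces of voice data into a training set (here
    four fifths, as in the embodiment) and a verification set."""
    items = list(pieces)
    random.Random(seed).shuffle(items)      # reproducible shuffle
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

train_set, verification_set = split_voice_data(range(1000))
assert len(train_set) == 800 and len(verification_set) == 200
```

The held-out verification set is what the later parameter-adjustment step uses to tune the third time delay neural network model.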
In one embodiment, the performing parameter adjustment on the third delay neural network model to obtain a fourth delay neural network model includes:
determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and carrying out parameter adjustment on the third time delay neural network model through the verification set to obtain the fourth time delay neural network model.
Parameter adjustment is performed on the third time delay neural network model using the verification set; the parameters of the third time delay neural network model are thereby optimized, and the fourth time delay neural network model obtained after optimization has high recognition accuracy.
In one embodiment, the importing the fourth time delay neural network model into an embedded device includes:
acquiring an import instruction of the embedded equipment;
judging whether the fourth time delay neural network model meets a preset standard; the preset standard may be, but is not limited to, whether the recognition accuracy of the fourth time delay neural network model meets the user's requirement.
And when the fourth delay neural network model meets preset standards, importing the fourth delay neural network model into the embedded equipment according to the importing instruction.
An import instruction is acquired, and whether the fourth time delay neural network model meets the preset standard is judged; only if it does is the fourth time delay neural network model imported into the preset device according to the import instruction. This avoids importing a model that does not meet the preset standard, which would lead to a poor user experience when the model is used, and it also saves time.
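A sketch of this gating logic (the accuracy metric, threshold, and function names are illustrative assumptions; the patent only requires some preset standard):

```python
def import_if_qualified(model_accuracy: float,
                        required_accuracy: float,
                        do_import) -> bool:
    """Import the model into the embedded device only when it meets
    the preset standard (here: a recognition-accuracy floor)."""
    if model_accuracy >= required_accuracy:
        do_import()
        return True
    return False  # skip the import: the model would give a poor experience

deployed = []
ok = import_if_qualified(0.95, 0.90, lambda: deployed.append("model4"))
print(ok, deployed)  # True ['model4']
```

A model that falls below the threshold is simply never pushed to the device, so no time is spent deploying and then replacing an unsatisfactory model.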
For the above method for reducing power consumption of a delay neural network model provided by the embodiment of the present invention, the embodiment of the present invention further provides a device for reducing power consumption of a delay neural network model, as shown in fig. 3, where the device includes:
an acquisition module 31 for acquiring a first time delay neural network model;
the training module 32 is configured to decompose an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model;
the adjustment module 33 is configured to perform parameter adjustment on the third delay neural network model to obtain a fourth delay neural network model;
an importing module 34, configured to import the fourth time delay neural network model into a preset embedded device.
As shown in fig. 4, in one embodiment, the training module 32 includes:
a clipping sub-module 321, configured to clip the original matrix in the first time delay neural network model into the two target matrices by using SVD technology;
and a constructing sub-module 322, configured to construct a model structure based on the first time delay neural network model through the two target matrices, so as to obtain the second time delay neural network model.
In one embodiment, the training module comprises:
the first acquisition sub-module is used for acquiring a plurality of pieces of voice data;
the first determining submodule is used for determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set;
and the first training sub-module is used for training the second time delay neural network model through the training set so as to obtain the third time delay neural network model.
In one embodiment, the adjustment module includes:
the second determining submodule is used for determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and the adjustment sub-module is used for carrying out parameter adjustment on the third time delay neural network model through the verification set so as to obtain the fourth time delay neural network model.
In one embodiment, the import module includes:
the second acquisition sub-module is used for acquiring the import instruction of the embedded equipment;
the judging submodule is used for judging whether the fourth time delay neural network model meets a preset standard;
and the importing sub-module is used for importing the fourth time delay neural network model into the embedded equipment according to the importing instruction when the fourth time delay neural network model meets a preset standard.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (6)
1. A method for reducing power consumption of a time-lapse neural network model, comprising:
acquiring a first time delay neural network model;
decomposing an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model;
performing parameter adjustment on the third time delay neural network model to obtain a fourth time delay neural network model;
importing the fourth time delay neural network model into a preset embedded device; wherein
the training the second time delay neural network model to obtain a third time delay neural network model includes:
acquiring a plurality of pieces of voice data;
determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set;
training the second time delay neural network model through the training set to obtain a third time delay neural network model;
the parameter adjustment is performed on the third delay neural network model to obtain a fourth delay neural network model, including:
determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and carrying out parameter adjustment on the third time delay neural network model through the verification set to obtain the fourth time delay neural network model.
2. The method of claim 1, wherein decomposing the original matrix in the first time-lapse neural network model into two target matrices to obtain a second time-lapse neural network model comprises:
cutting an original matrix in the first time delay neural network model into the two target matrices by utilizing an SVD (singular value decomposition) technology;
and constructing a model structure through the two target matrixes based on the first time delay neural network model so as to obtain the second time delay neural network model.
3. The method of claim 1, wherein the importing the fourth time delay neural network model into an embedded device comprises:
acquiring an import instruction of the embedded equipment;
judging whether the fourth time delay neural network model meets a preset standard;
and when the fourth time delay neural network model meets the preset standard, importing the fourth time delay neural network model into the embedded device according to the import instruction.
4. An apparatus for reducing power consumption of a time-lapse neural network model, comprising:
the acquisition module is used for acquiring a first time delay neural network model;
the training module is used for decomposing an original matrix in the first time delay neural network model into two target matrices to obtain a second time delay neural network model; training the second time delay neural network model to obtain a third time delay neural network model;
the adjusting module is used for carrying out parameter adjustment on the third time delay neural network model so as to obtain a fourth time delay neural network model;
the importing module is used for importing the fourth time delay neural network model into preset embedded equipment;
the training module comprises:
the first acquisition sub-module is used for acquiring a plurality of pieces of voice data;
the first determining submodule is used for determining a first preset number of pieces of voice data in the plurality of pieces of voice data as a training set;
the first training submodule is used for training the second time delay neural network model through the training set so as to obtain the third time delay neural network model;
the adjustment module comprises:
the second determining submodule is used for determining a second preset number of pieces of voice data in the plurality of pieces of voice data as a verification set;
and the adjustment sub-module is used for carrying out parameter adjustment on the third time delay neural network model through the verification set so as to obtain the fourth time delay neural network model.
5. The apparatus of claim 4, wherein the training module comprises:
a clipping sub-module, configured to clip an original matrix in the first time delay neural network model into the two target matrices by using an SVD technique;
and the construction submodule is used for constructing a model structure through the two target matrixes based on the first time delay neural network model so as to obtain the second time delay neural network model.
6. The apparatus of claim 4, wherein the import module comprises:
the second acquisition sub-module is used for acquiring the import instruction of the embedded equipment;
the judging submodule is used for judging whether the fourth time delay neural network model meets a preset standard;
and the importing sub-module is used for importing the fourth time delay neural network model into the embedded equipment according to the importing instruction when the fourth time delay neural network model meets a preset standard.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010163869.3A (CN111428860B) | 2020-03-11 | 2020-03-11 | Method and device for reducing power consumption of time delay neural network model
Publications (2)
Publication Number | Publication Date |
---|---|
CN111428860A CN111428860A (en) | 2020-07-17 |
CN111428860B true CN111428860B (en) | 2023-05-30 |
Family
ID=71553403
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010163869.3A (CN111428860B, active) | 2020-03-11 | 2020-03-11 | Method and device for reducing power consumption of time delay neural network model
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111428860B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019139760A1 (en) * | 2018-01-12 | 2019-07-18 | Microsoft Technology Licensing, Llc | Automated localized machine learning training |
CN110767231A (en) * | 2019-09-19 | 2020-02-07 | 平安科技(深圳)有限公司 | Voice control equipment awakening word identification method and device based on time delay neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10679629B2 (en) * | 2018-04-09 | 2020-06-09 | Amazon Technologies, Inc. | Device arbitration by multiple speech processing systems |
Non-Patent Citations (1)
Title |
---|
Miao Fengjuan; Wang Yiming; Tao Bairui. Design of a convolutional neural network accelerator based on a software-defined system-on-programmable-chip. Science Technology and Engineering, 2019, (34), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111428860A (en) | 2020-07-17 |
Similar Documents
Publication | Title
---|---
US10332507B2 (en) | Method and device for waking up via speech based on artificial intelligence
CN102486922B (en) | Speaker recognition method, device and system
CN106847305B (en) | Method and device for processing recording data of customer service telephone
CN108920510A (en) | Automatic chatting method, device and electronic equipment
CN106356077B (en) | Laugh detection method and device
CN110610698B (en) | Voice labeling method and device
CN112348110B (en) | Model training and image processing method and device, electronic equipment and storage medium
CN110890088B (en) | Voice information feedback method and device, computer equipment and storage medium
CN106531195B (en) | Dialogue collision detection method and device
CN111428860B (en) | Method and device for reducing power consumption of time delay neural network model
CN112052686B (en) | Voice learning resource pushing method for user interactive education
CN109727603A (en) | Speech processing method, device, user equipment and storage medium
CN110708619B (en) | Word vector training method and device for intelligent equipment
CN112466310A (en) | Deep learning voiceprint recognition method and device, electronic equipment and storage medium
CN112288032B (en) | Method and device for quantization model training based on a generative adversarial network
CN113673349B (en) | Method, system and device for generating Chinese text from images based on a feedback mechanism
CN1342969A (en) | Method for recognizing voice
CN114783424A (en) | Text corpus screening method, device, equipment and storage medium
CN111768764B (en) | Voice data processing method and device, electronic equipment and medium
CN111596261B (en) | Sound source positioning method and device
CN114420136A (en) | Method and device for training voiceprint recognition model and storage medium
CN109887487B (en) | Data screening method and device and electronic equipment
CN103390404A (en) | Information processing apparatus, information processing method and information processing program
CN111161708A (en) | Voice information processing method and device
CN115116430A (en) | Voice data analysis method and system
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |