CN106303695A - Audio translation multiple language characters processing method and system - Google Patents
Audio translation multiple language characters processing method and system
- Publication number
- CN106303695A CN106303695A CN201610647853.3A CN201610647853A CN106303695A CN 106303695 A CN106303695 A CN 106303695A CN 201610647853 A CN201610647853 A CN 201610647853A CN 106303695 A CN106303695 A CN 106303695A
- Authority
- CN
- China
- Prior art keywords
- file
- audio
- analysis processing
- speech
- format analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440236—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/55—Rule-based translation
- G06F40/56—Natural language generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
Abstract
The invention provides a method and system for processing audio translation into multiple-language text, comprising the steps of: analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech; determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results and obtaining multiple first speech audio segments; converting each first speech audio segment from speech to text using speech recognition software, generating multiple second formatted results and obtaining the original text files corresponding to the first speech audio segments; and merging the first formatted results and the original text files by serial number to obtain a subtitle file for the corresponding audio. The invention enables rapid, high-volume, and even fully batch-automated generation of subtitles.
Description
Technical field
The present invention relates to audio signal processing in video and audio, and more particularly to a method and system for processing audio translation into multiple-language text.
Background art
With the global distribution of films and television series, demand for multilingual subtitles keeps growing. Traditional subtitle translation is manual and proceeds in two stages: first, the subtitles are spotted against the speech, i.e. the start time and end time of each speech passage are marked; second, based on that spotting, the subtitle text corresponding to each time point is translated. Because this method is entirely manual, it is inefficient and cannot produce subtitles rapidly and in high volume.
Summary of the invention
To solve the above technical problem, the invention provides a method and system for processing audio translation into multiple-language text. Within a video or audio stream, it uses the audio's sound wave to detect and mark speech boundaries, and then translates the corresponding audio into multiple languages, achieving rapid, high-volume, and even batch-automated subtitle generation.
To achieve the above object, the invention provides a method for processing audio translation into multiple-language text, comprising the following steps:
analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech;
determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results (hereinafter R1) and obtaining multiple first speech audio segments (hereinafter R2);
converting each first speech audio segment R2 from speech to text using speech recognition software, generating multiple second formatted results (hereinafter R3) and obtaining multiple original text files corresponding to the first speech audio segments;
merging the first formatted results R1 and the original text files by serial number to obtain a subtitle file for the corresponding audio.
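The steps above can be sketched end to end as follows. This is an illustrative reconstruction, not code from the patent: the amplitude threshold, silence length, and the recognizer (a stand-in lambda) are all assumptions chosen for the example.

```python
def detect_voice_segments(amplitudes, silence_level=2, min_silence_frames=10):
    """Split a per-frame amplitude list into voiced (start, end) spans (R1)."""
    segments, start, quiet = [], None, 0
    for i, a in enumerate(amplitudes):
        if abs(a) > silence_level:
            if start is None:          # a new speech passage begins
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_silence_frames:   # long enough silence ends it
                segments.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:
        segments.append((start, len(amplitudes)))
    return segments

def build_subtitles(amplitudes, recognize):
    """Produce numbered subtitle records: (serial, start, end, text)."""
    r1 = detect_voice_segments(amplitudes)
    return [(n, s, e, recognize(amplitudes[s:e]))
            for n, (s, e) in enumerate(r1, start=1)]

# toy signal: two speech bursts separated by silence
signal = [0] * 5 + [50] * 8 + [0] * 15 + [40] * 6 + [0] * 12
subs = build_subtitles(signal, recognize=lambda seg: f"{len(seg)} frames")
```

The serial number attached to each record is what the later merging step keys on.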
To achieve the object of the invention, a system for processing audio translation into multiple-language text is also provided, comprising:
a waveform extraction module, for analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech;
a waveform recognition module, for determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results R1 and obtaining multiple first speech audio segments R2;
a language text generation module, for converting each first speech audio segment R2 from speech to text using speech recognition software, generating multiple second formatted results R3 and obtaining multiple original text files corresponding to the first speech audio segments R2;
a merging module, for merging the first formatted results R1 and the original text files by serial number, generating a subtitle file, and outputting it merged into the video.
With the audio-translation multilingual text processing method and system of the present invention, speech boundaries in a video or audio stream are detected and marked from the audio's sound wave, and the corresponding audio is then translated into multiple languages, achieving rapid, high-volume, and even batch-automated subtitle generation.
Brief description of the drawings
Fig. 1 is a flow chart of the audio-translation multilingual text processing method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the audio-translation multilingual text processing system of an embodiment of the present invention.
Detailed description of the invention
Representative embodiments of the present invention are described in detail below with reference to Figs. 1 and 2.
The audio-translation multilingual text processing method of the embodiment of the present invention, as shown in Fig. 1, comprises the following steps:
Step S100: extract the audio from a video file to obtain the speech audio file of the video file.
Step S200: analyze the speech audio file and produce multiple waveform files according to the strength of the user's speech.
A waveform file is extracted from the speech audio file; it represents the strength of the speech, indicating the strong and weak portions of the sound waveform.
The speech audio file is analyzed to produce a waveform file representing sound intensity according to the strength of the user's speech; the waveform file uses audio values from -128 to +127 to represent the sound intensity of each frame.
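A minimal sketch of how such a per-frame intensity file could be derived from 8-bit signed PCM samples (the -128 to +127 range above). The frame size is an assumed parameter; the patent does not specify one.

```python
def frame_peaks(samples, frame_size=160):
    """Peak absolute amplitude per frame, clamping samples to -128..+127."""
    peaks = []
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        peaks.append(max((abs(max(min(s, 127), -128)) for s in frame), default=0))
    return peaks

# two synthetic 300-sample frames: a quiet one, then a loud one
pcm = [0, 3, -2] * 100 + [90, -110, 60] * 100
peaks = frame_peaks(pcm, frame_size=300)
```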
Step S300: determine the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results R1 and obtaining multiple first speech audio segments R2.
In one embodiment, the start and end of a speech passage can be judged from a time threshold on the sound waveform; for example, if the waveform stays quiet for more than 10 milliseconds, the speech passage is taken to start or end there.
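How many quiet frames correspond to the 10 ms threshold depends on the sample rate and frame size, neither of which the patent specifies; the values below are assumptions for illustration.

```python
def ms_to_silence_frames(threshold_ms=10, sample_rate=16000, frame_size=160):
    """Number of consecutive quiet frames equivalent to the ms threshold."""
    samples = sample_rate * threshold_ms // 1000   # quiet samples required
    return max(1, samples // frame_size)           # expressed in whole frames

# at 16 kHz with 160-sample (10 ms) frames, the 10 ms rule is one frame
gap = ms_to_silence_frames()
```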
From the waveform file, auditory-perception features such as fundamental frequency and pitch can be extracted for auditory-perception retrieval, and other acoustic features that distinguish different sounds can also be extracted to form a feature vector. For example, a group of auditory perceptual features (fundamental frequency, loudness, tone, etc.) is computed per time slice. Since the sound waveform varies over time, the final feature vector is a statistic of these features, represented for example by mean, variance, and autocorrelation values. Sounds can thus be classified, e.g. as animal sounds, machine sounds, musical instruments, speech, and other natural sounds. This result generates the first formatted result R1.
The first formatted result R1 includes, but is not limited to, a serial number, the start time of the time slice, and the end time of the time slice.
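A plausible in-memory shape for one R1 record (serial number, time-slice start, time-slice end). The field names and the tab-separated serialization are assumptions, not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class R1Entry:
    serial: int     # serial number used later to merge R1/R3/R4
    start_ms: int   # start time of the time slice, milliseconds
    end_ms: int     # end time of the time slice, milliseconds

    def to_line(self):
        return f"{self.serial}\t{self.start_ms}\t{self.end_ms}"

entry = R1Entry(serial=1, start_ms=1200, end_ms=3450)
```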
Step S400: convert each first speech audio segment R2 from speech to text using speech recognition software, generating multiple second formatted results R3 and obtaining multiple original text files corresponding to the first speech audio segments R2.
The audio file R2 for each time period is cut out of the speech audio according to the first formatted result R1, converted by speech recognition software from speech into an original text result in a preset language, and a second formatted result R3 is generated. The second formatted result R3 includes, but is not limited to, the serial number from the first formatted result R1 and the original text result for the audio of that time period.
The preset languages include, but are not limited to, one or more of Chinese, English, Japanese, Arabic, French, German, etc.
The original text is in the source language of the speech; this language can be preset, or identified and set automatically by the invention, which then transcribes everything into that source language.
The speech recognition software includes, but is not limited to, one or more of OFFICE speech recognition, Baidu speech recognition, iFLYTEK speech recognition, and the like. Preferably, multiple recognition engines can be run separately, and the best recognition is selected as the original text file.
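A sketch of the "run several recognizers and keep the best" idea above. The real engines are replaced by stand-in callables returning (text, confidence); selecting by confidence score is an assumption, since the patent does not define how the optimum recognition is chosen.

```python
def best_transcript(audio_segment, engines):
    """Run every engine on the segment and keep the highest-confidence text."""
    results = [engine(audio_segment) for engine in engines]
    text, _confidence = max(results, key=lambda r: r[1])
    return text

engines = [
    lambda seg: ("helo world", 0.72),   # stand-in for engine A
    lambda seg: ("hello world", 0.91),  # stand-in for engine B
]
text = best_transcript(b"fake-audio-bytes", engines)
```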
Step S500: translate the original text files into other languages, generating third formatted results R4 and obtaining multiple-language text files for the other languages.
The text of the second formatted result is translated into the set languages using existing translation software, generating the third formatted result R4. The third formatted result R4 includes, but is not limited to, the serial number from the first formatted result R1 and the translation result after language translation.
Step S600: merge the first formatted results R1, the original text files, and the multiple-language text files by serial number to obtain the subtitle file of the audio; the subtitle file corresponding to the video/audio is then output into the video/audio.
The first formatted result R1, the second formatted result R3, and the third formatted result R4 are merged by serial number to produce the final translation result.
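The merge by serial number can be sketched as follows, joining R1 timing with R3 source text and R4 translation into one subtitle block. The SRT-style output is an assumption; the patent does not name a subtitle format.

```python
def merge_subtitles(r1, r3, r4):
    """Each input maps serial -> value; returns one SRT-style string."""
    blocks = []
    for serial in sorted(r1):
        start, end = r1[serial]
        # block: serial, time range, source-language line, translated line
        blocks.append(f"{serial}\n{start} --> {end}\n{r3[serial]}\n{r4[serial]}")
    return "\n\n".join(blocks)

srt = merge_subtitles(
    r1={1: ("00:00:01,200", "00:00:03,450")},
    r3={1: "hello"},
    r4={1: "bonjour"},
)
```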
To solve the above technical problem, an embodiment of the present invention also provides an audio-translation multilingual text processing system, as shown in Fig. 2, comprising:
a waveform extraction module 20, for analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech;
a waveform recognition module 30, for determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results R1 and obtaining multiple first speech audio segments R2;
a language text generation module 40, for converting each first speech audio segment from speech to text using speech recognition software, generating multiple second formatted results R3 and obtaining multiple original text files corresponding to the first speech audio segments;
a merging module 60, for merging the source-language text files with the other-language text files, generating a subtitle file, and outputting it merged into the video.
In one embodiment, the multilingual text processing system further includes an audio extraction module 10, for extracting the audio from a video file to obtain the speech audio file of the video file.
The merging module 60 further includes an output submodule 70, for outputting the generated subtitle file merged into the video.
In one embodiment, the multilingual text processing system further includes a translation module 50, for generating other-language text files from the multiple source-language text files using translation software.
The merging module 60 further includes a translation merging submodule 80, for merging the first formatted results R1, the original text files, and the multilingual text files to obtain a multilingual subtitle file.
The audio-translation multilingual text processing system of the embodiment of the present invention performs essentially the same processing as the audio-translation multilingual text processing method, obtains essentially the same results, and yields essentially the same beneficial effects; it is therefore not described again in detail here.
With the audio-translation multilingual text processing method and system of the embodiments of the present invention, speech boundaries in a video or audio stream are detected and marked from the audio's sound wave, and the corresponding audio is then translated into multiple languages, achieving rapid, high-volume, and even batch-automated subtitle generation.
Representative embodiments of the present invention have been described in detail with reference to the drawings. These detailed descriptions merely give those skilled in the art further information for practicing preferred aspects of the invention and do not limit its scope; only the claims determine the scope of protection of the present invention. Therefore, the combinations of features and steps in the foregoing detailed description are not essential for practicing the invention in its broadest scope; they merely teach particularly detailed representative embodiments. Moreover, the various features taught in the description may be combined in numerous ways to obtain additional embodiments of the invention, even though those combinations are not specifically enumerated.
Claims (10)
1. A method for processing audio translation into multiple-language text, comprising the steps of:
analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech;
determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results R1 and obtaining multiple speech audio segments R2;
converting each first speech audio segment R2 from speech to text using speech recognition software, generating multiple second formatted results R3 and obtaining multiple original text files corresponding to the first formatted results R1;
merging the first formatted results R1 and the second formatted results R3 by serial number to obtain a subtitle file for the corresponding audio.
2. The multilingual text processing method according to claim 1, characterized in that:
before analyzing the speech audio file, the method further comprises the step of:
extracting the audio from a video file to obtain the speech audio file of the video file;
and after merging the second formatted results R3 with the first formatted results R1 by serial number, the method further comprises the step of:
outputting the subtitle file corresponding to the video/audio into the video/audio.
3. The multilingual text processing method according to claim 1 or 2, characterized in that:
after obtaining the multiple original text files, the method further comprises the step of:
translating the original text files into other languages, generating third formatted results R4 and obtaining multiple-language text files for the other languages;
and the merging further includes merging the multiple-language text files.
4. The multilingual text processing method according to claim 3, characterized in that:
the first formatted result R1 includes, but is not limited to, a serial number, the start time of the time slice, and the end time of the time slice.
5. The multilingual text processing method according to claim 4, characterized in that:
the second formatted result R3 includes the serial number from the first formatted result R1 and the original text result for the audio of that time period.
6. The multilingual text processing method according to claim 3, characterized in that:
the third formatted result R4 includes, but is not limited to, the serial number from the first formatted result R1 and the translation result after language translation.
7. The multilingual text processing method according to claim 3, characterized in that:
converting speech to text using speech recognition software comprises the step of:
recognizing with multiple speech recognition engines separately and selecting the best recognition as the original text file.
8. A system for processing audio translation into multiple-language text, characterized by comprising:
a waveform extraction module, for analyzing a speech audio file and producing multiple waveform files according to the strength of the user's speech;
a waveform recognition module, for determining the start and end of multiple speech passages from the waveform characteristics of each waveform file, generating multiple first formatted results R1 and obtaining multiple first speech audio segments R2;
a language text generation module, for converting each first speech audio segment R2 from speech to text, generating multiple second formatted results R3 and obtaining multiple original text files corresponding to the first speech audio segments;
a merging module, for merging the source-language text files with the other-language text files, generating a subtitle file, and outputting it merged into the video.
9. The multilingual text processing system according to claim 8, characterized in that:
the system further comprises an audio extraction module, for extracting the audio from a video file to obtain the speech audio file of the video file;
and the merging module further comprises an output submodule, for outputting the generated subtitle file merged into the video.
10. The multilingual text processing system according to claim 8 or 9, characterized in that:
the system further comprises a translation module, for generating third formatted results R4 from the multiple source-language text files using translation software, obtaining other-language text files;
and the merging module further comprises a translation merging submodule, for merging the first formatted results R1, the original text files, and the multilingual text files to obtain a multilingual subtitle file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610647853.3A CN106303695A (en) | 2016-08-09 | 2016-08-09 | Audio translation multiple language characters processing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610647853.3A CN106303695A (en) | 2016-08-09 | 2016-08-09 | Audio translation multiple language characters processing method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106303695A true CN106303695A (en) | 2017-01-04 |
Family
ID=57667173
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610647853.3A Pending CN106303695A (en) | 2016-08-09 | 2016-08-09 | Audio translation multiple language characters processing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106303695A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106888392A (en) * | 2017-02-14 | 2017-06-23 | 广东九联科技股份有限公司 | A kind of Set Top Box automatic translation system and method |
CN108882017A (en) * | 2018-09-27 | 2018-11-23 | 广州市录音啦网络科技有限公司 | A kind of television terminal and its working method with interpretative function |
CN109658919A (en) * | 2018-12-17 | 2019-04-19 | 深圳市沃特沃德股份有限公司 | Interpretation method, device and the translation playback equipment of multimedia file |
WO2019119552A1 (en) * | 2017-12-20 | 2019-06-27 | 深圳市沃特沃德股份有限公司 | Method for translating continuous long speech file, and translation machine |
CN110471659A (en) * | 2019-08-16 | 2019-11-19 | 珠海格力电器股份有限公司 | Multilingual method and system, human-machine interface configuration software end and equipment end |
CN113038259A (en) * | 2021-03-05 | 2021-06-25 | 深圳市广程杰瑞科技有限公司 | Lesson quality feedback method and system for internet education |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101382937A (en) * | 2008-07-01 | 2009-03-11 | 深圳先进技术研究院 | Multimedia resource processing method based on speech recognition and on-line teaching system thereof |
US20120005701A1 (en) * | 2010-06-30 | 2012-01-05 | Rovi Technologies Corporation | Method and Apparatus for Identifying Video Program Material or Content via Frequency Translation or Modulation Schemes |
CN103167360A (en) * | 2013-02-21 | 2013-06-19 | 中国对外翻译出版有限公司 | Method for achieving multilingual subtitle translation |
WO2015019774A1 (en) * | 2013-08-09 | 2015-02-12 | シャープ株式会社 | Data generating device, data generating method, translation processing device, program, and data |
CN104795082A (en) * | 2015-03-26 | 2015-07-22 | 广州酷狗计算机科技有限公司 | Player and audio subtitle display method and device |
CN105245917A (en) * | 2015-09-28 | 2016-01-13 | 徐信 | System and method for generating multimedia voice caption |
CN105681890A (en) * | 2016-01-26 | 2016-06-15 | 广东欧珀移动通信有限公司 | Caption display method and device for video playing terminal |
-
2016
- 2016-08-09 CN CN201610647853.3A patent/CN106303695A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101382937A (en) * | 2008-07-01 | 2009-03-11 | 深圳先进技术研究院 | Multimedia resource processing method based on speech recognition and on-line teaching system thereof |
US20120005701A1 (en) * | 2010-06-30 | 2012-01-05 | Rovi Technologies Corporation | Method and Apparatus for Identifying Video Program Material or Content via Frequency Translation or Modulation Schemes |
CN103167360A (en) * | 2013-02-21 | 2013-06-19 | 中国对外翻译出版有限公司 | Method for achieving multilingual subtitle translation |
WO2015019774A1 (en) * | 2013-08-09 | 2015-02-12 | シャープ株式会社 | Data generating device, data generating method, translation processing device, program, and data |
CN104795082A (en) * | 2015-03-26 | 2015-07-22 | 广州酷狗计算机科技有限公司 | Player and audio subtitle display method and device |
CN105245917A (en) * | 2015-09-28 | 2016-01-13 | 徐信 | System and method for generating multimedia voice caption |
CN105681890A (en) * | 2016-01-26 | 2016-06-15 | 广东欧珀移动通信有限公司 | Caption display method and device for video playing terminal |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106888392A (en) * | 2017-02-14 | 2017-06-23 | 广东九联科技股份有限公司 | A kind of Set Top Box automatic translation system and method |
WO2019119552A1 (en) * | 2017-12-20 | 2019-06-27 | 深圳市沃特沃德股份有限公司 | Method for translating continuous long speech file, and translation machine |
CN108882017A (en) * | 2018-09-27 | 2018-11-23 | 广州市录音啦网络科技有限公司 | A kind of television terminal and its working method with interpretative function |
CN109658919A (en) * | 2018-12-17 | 2019-04-19 | 深圳市沃特沃德股份有限公司 | Interpretation method, device and the translation playback equipment of multimedia file |
CN110471659A (en) * | 2019-08-16 | 2019-11-19 | 珠海格力电器股份有限公司 | Multilingual method and system, human-machine interface configuration software end and equipment end |
CN110471659B (en) * | 2019-08-16 | 2023-07-21 | 珠海格力电器股份有限公司 | Multilingual implementation method and system, man-machine interface configuration software end and equipment end |
CN113038259A (en) * | 2021-03-05 | 2021-06-25 | 深圳市广程杰瑞科技有限公司 | Lesson quality feedback method and system for internet education |
CN113038259B (en) * | 2021-03-05 | 2023-09-08 | 河南校信通教育科技有限公司 | Method and system for feeding back class quality of Internet education |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106303695A (en) | Audio translation multiple language characters processing method and system | |
CN103503060B (en) | Speech syllable/vowel/phone boundary detection using auditory attention cues | |
CN106504754B (en) | A kind of real-time method for generating captions according to audio output | |
KR101990023B1 (en) | Method for chunk-unit separation rule and display automated key word to develop foreign language studying, and system thereof | |
US7881930B2 (en) | ASR-aided transcription with segmented feedback training | |
US20180039859A1 (en) | Joint acoustic and visual processing | |
JPS58130393A (en) | Voice recognition equipment | |
CN101751919A (en) | Spoken Chinese stress automatic detection method | |
EP1349145A3 (en) | System and method for providing information using spoken dialogue interface | |
CN106782508A (en) | The cutting method of speech audio and the cutting device of speech audio | |
JP2008158055A (en) | Language pronunciation practice support system | |
CN106782517A (en) | A kind of speech audio keyword filter method and device | |
CN110349565B (en) | Auxiliary pronunciation learning method and system for hearing-impaired people | |
CN114927126A (en) | Scheme output method, device and equipment based on semantic analysis and storage medium | |
CN116894442B (en) | Language translation method and system for correcting guide pronunciation | |
WO2004093078A1 (en) | Process for adding subtitles to video content | |
CN112231440A (en) | Voice search method based on artificial intelligence | |
CN112185341A (en) | Dubbing method, apparatus, device and storage medium based on speech synthesis | |
Cahyaningtyas et al. | Development of under-resourced Bahasa Indonesia speech corpus | |
CN114724589A (en) | Voice quality inspection method and device, electronic equipment and storage medium | |
CN114512118A (en) | Intelligent sentence dividing method based on sound spectrogram, computer device and storage medium | |
JP2003018462A (en) | Character inserting device and character inserting method | |
US20030212560A1 (en) | Speech synthesis apparatus and its method, and program | |
JP2009146043A (en) | Unit and method for voice translation, and program | |
Waghmare et al. | A Comparative Study of the Various Emotional Speech Databases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170104 |