CN112037769A - A training data generation method, apparatus, and computer-readable storage medium - Google Patents
- Publication number: CN112037769A (application CN202010738406.5A)
- Authority: CN (China)
- Prior art keywords: information, text, training, audio, sub
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G10L15/063 — Speech recognition; creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G06F40/284 — Handling natural language data; lexical analysis, e.g. tokenisation or collocates
- G10L15/10 — Speech classification or search using distance or distortion measures between unknown speech and reference templates
- G10L15/26 — Speech to text systems
Abstract
The invention discloses a training data generation method, an apparatus, and a computer-readable storage medium. The method comprises: receiving audio information and corresponding annotated text information; generating speech-recognition text information and first timestamp information corresponding to the audio information; content-matching the annotated text information and the speech-recognition text information, and generating second timestamp information corresponding to the annotated text information according to the first timestamp information; and, according to the second timestamp information, obtaining sub-text training information from the annotated text information and sub-audio training information from the audio information. By acquiring original audio information and annotated text information, and using the audio's timestamp information to extract from them multiple items of sub-audio training information and corresponding sub-text training information, a large amount of high-quality speech training data is obtained; the process is efficient and reduces cost.
Description
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to a training data generation method, an apparatus, and a computer-readable storage medium.
Background Art
Training a speech recognition system requires a large amount of paired speech-and-text training data. Existing approaches to obtaining such data either collect speech and have annotators transcribe it through a speech annotation system, or specify a large body of text and have different speakers record it. Large-scale manual recording and annotation can produce training data for a speech recognition system, but it consumes substantial labor and time, and obtaining large amounts of high-quality speech training data this way is very difficult, leaving speech recognition systems short of training sets.
Summary of the Invention
Embodiments of the present invention provide a training data generation method, an apparatus, and a computer-readable storage medium, which achieve the technical effect of efficiently acquiring a large amount of high-quality speech training data at reduced cost.
One aspect of the present invention provides a training data generation method, the method comprising: receiving audio information and corresponding annotated text information; generating speech-recognition text information and first timestamp information corresponding to the audio information; content-matching the annotated text information and the speech-recognition text information, and generating second timestamp information corresponding to the annotated text information according to the first timestamp information; and, according to the second timestamp information, obtaining sub-text training information from the annotated text information and sub-audio training information from the audio information.
In an implementable embodiment, content-matching the annotated text information and the speech-recognition text information comprises: performing text-similarity matching between the annotated text information and the speech-recognition text information using an edit-distance algorithm; and, taking the annotated text information as the reference, performing text alignment on the characters/words of the matched speech-recognition text information.
In an implementable embodiment, generating the second timestamp information corresponding to the annotated text information according to the first timestamp information comprises: obtaining, from the first timestamp information, the start timestamp and end timestamp of each character/word in the speech-recognition text information; and, for each character/word in the annotated text information, copying the start and end timestamps of the matching character/word in the speech-recognition text information, thereby generating the second timestamp information corresponding to the annotated text information.
In an implementable embodiment, before content-matching the annotated text information and the speech-recognition text information, the method comprises: obtaining, through a speech recognition system, the confidence corresponding to each character/word in the speech-recognition text information; and, according to the confidence of each character/word, detecting and replacing the corresponding character/word in the annotated text information.
In an implementable embodiment, obtaining, according to the second timestamp information, the sub-text training information in the annotated text information and the sub-audio training information in the audio information comprises: splitting the annotated text information into multiple items of sub-text training information at set characters or by a specified character count, and obtaining from the second timestamp information the start and end timestamps corresponding to each item of sub-text training information; and splitting the audio information into multiple items of sub-audio training information according to those start and end timestamps.
In an implementable embodiment, before generating the speech-recognition text information and the first timestamp information corresponding to the audio information, the method further comprises: feeding the annotated text information into the language model of the speech recognition system for training, or dynamically increasing the probability value of the annotated text information while the speech recognition system decodes.
Another aspect of the present invention provides a training data generation apparatus, the apparatus comprising: an information receiving module, configured to receive audio information and corresponding annotated text information; a first information generation module, configured to generate speech-recognition text information and first timestamp information corresponding to the audio information; a second information generation module, configured to content-match the annotated text information and the speech-recognition text information and generate second timestamp information corresponding to the annotated text information according to the first timestamp information; and a training data generation module, configured to obtain, according to the second timestamp information, sub-text training information from the annotated text information and sub-audio training information from the audio information.
In an implementable embodiment, the second information generation module is specifically configured to: perform text-similarity matching between the annotated text information and the speech-recognition text information using an edit-distance algorithm; and, taking the annotated text information as the reference, perform text alignment on the characters/words of the matched speech-recognition text information.
In an implementable embodiment, the training data generation module is specifically configured to: split the annotated text information into multiple items of sub-text training information at set characters or by a specified character count, and obtain from the second timestamp information the start and end timestamps corresponding to each item of sub-text training information; and split the audio information into multiple items of sub-audio training information according to those start and end timestamps.
Another aspect of the present invention provides a computer-readable storage medium, the storage medium comprising a set of computer-executable instructions which, when executed, perform any of the training data generation methods described above.
In the embodiments of the present invention, original audio information and annotated text information are acquired, and the timestamp information of the audio is used to extract from them multiple items of sub-audio training information and corresponding sub-text training information, yielding a large amount of high-quality speech training data; the process is efficient and reduces cost.
Brief Description of the Drawings
The above and other objects, features, and advantages of exemplary embodiments of the present invention will become readily understood by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present invention are shown by way of example and not limitation, and the same or corresponding reference numerals denote the same or corresponding parts.
FIG. 1 is a schematic flowchart of a training data generation method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a training data generation apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, features, and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of a training data generation method according to an embodiment of the present invention.

As shown in FIG. 1, one aspect of the present invention provides a training data generation method, the method comprising:

Step 101: receive audio information and corresponding annotated text information;
Step 102: generate speech-recognition text information and first timestamp information corresponding to the audio information;
Step 103: content-match the annotated text information and the speech-recognition text information, and generate second timestamp information corresponding to the annotated text information, the second timestamp information corresponding to the first timestamp information;
Step 104: according to the second timestamp information, obtain sub-text training information from the annotated text information and sub-audio training information from the audio information.
In this embodiment, in step 101, the audio information and corresponding annotated text information are preferably long audio with long annotated text, such as audiobooks, recorded speeches, or interview recordings; they may be crawled from the web or retrieved from a local database.
In step 102, the speech-recognition text information and the first timestamp information can be obtained by feeding the received audio into an existing speech recognition system, or by manual measurement. The first timestamp information contains the start and end timestamps of every character or word in the recognized text. For example, if the annotated text is "天很热，地球南极的冰川都陷落了" ("It is very hot; the glaciers at the Earth's South Pole have all collapsed") and the recognized text is "天很热地球南极冰川都显露" ("It is very hot the Earth's South Pole glaciers are all exposed"), the timestamp information might be:
天很热: [天, 19.83, 20.49], [很, 20.49, 20.79], [热, 20.79, 21.00];
地球南极: [地球, 21.90, 22.05], [南极, 22.05, 22.62];
冰川显露: [冰川, 23.67, 24.00], [显露, 24.00, 24.24].
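Such per-character/word timestamp records can be held in a simple list of (word, start, end) tuples; the following is a minimal sketch (the variable names and the `span` helper are illustrative assumptions, not part of the patent):

```python
# Hypothetical layout for the first timestamp information: one
# (word, start_seconds, end_seconds) tuple per recognized word.
asr_words = [
    ("天", 19.83, 20.49), ("很", 20.49, 20.79), ("热", 20.79, 21.00),
    ("地球", 21.90, 22.05), ("南极", 22.05, 22.62),
    ("冰川", 23.67, 24.00), ("显露", 24.00, 24.24),
]

def span(words, i, j):
    """Start/end timestamps covering words[i:j] (j exclusive)."""
    return words[i][1], words[j - 1][2]

print(span(asr_words, 0, 3))  # span covering the words of "天很热"
```

A span over any run of words is then just the first word's start and the last word's end, which is what the later splitting step relies on.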
In step 103, the annotated text information and the speech-recognition text information are content-matched, so that second timestamp information corresponding to the annotated text information is generated.
Then, in step 104, the sub-text training information in the annotated text information and the sub-audio training information in the audio information are obtained according to the second timestamp information.
Thus, by acquiring original audio information and annotated text information and using the audio's timestamp information to extract multiple items of sub-audio training information and corresponding sub-text training information, a large amount of high-quality speech training data is obtained; the process is efficient and reduces cost.
In an implementable embodiment, content-matching the annotated text information and the speech-recognition text information includes:
performing text-similarity matching between the annotated text information and the speech-recognition text information using an edit-distance algorithm; and
taking the annotated text information as the reference, performing text alignment on the characters/words of the matched speech-recognition text information.
In this embodiment, edit distance refers to the minimum number of edit operations required to transform one string into the other; the greater the distance between two strings, the lower their similarity.
During similarity matching, the annotated text information and the speech-recognition text information may each be split into multiple long-sentence items or word-level items using punctuation or a word segmentation tool; the items from the two sides are then pairwise similarity-matched, and the pair with the highest similarity is deemed a match.
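A sketch of this sentence-level matching follows; the patent only specifies an edit-distance algorithm, so the normalization into a similarity score is an assumption:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # delete a[i-1]
                        dp[j - 1] + 1,    # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
    return dp[-1]

def similarity(a: str, b: str) -> float:
    """1 minus normalized edit distance; higher means more similar."""
    return 1.0 if not a and not b else 1.0 - edit_distance(a, b) / max(len(a), len(b))

# For an annotated sentence, pick the most similar recognized sentence.
ref = "地球南极的冰川都陷落了"
candidates = ["天很热", "地球南极冰川都显露"]
best = max(candidates, key=lambda c: similarity(ref, c))
```

With the example sentences above, the second candidate wins because it shares most characters with the reference despite the recognition errors at its tail.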
Text alignment is then performed. In this process, an existing word segmentation tool may first segment the matched speech-recognition text into multiple characters/words; then, taking the annotated text information as the reference, each character/word of the speech-recognition text is put in correspondence with a character/word of the annotated text, and unaligned positions may be filled with a designated symbol, completing the content matching. For the example above, the alignment is:
Annotated text: 天很热，地球南极的冰川都陷落了;
Recognized text: 天很热__地球南极__冰川都显露__ (the underscores denote blanks).
In an implementable embodiment, generating the second timestamp information corresponding to the annotated text information according to the first timestamp information includes:
obtaining, from the first timestamp information, the start and end timestamps of each character/word in the speech-recognition text information; and
for each character/word in the annotated text information, copying the start and end timestamps of the matching character/word in the speech-recognition text information, thereby generating the second timestamp information corresponding to the annotated text information.
In this embodiment, the second timestamp information is generated as follows:
after content matching is complete, the start and end timestamps of each character/word in the speech-recognition text information are obtained and copied, by character index, to the character/word at the corresponding index position in the annotated text information, thereby generating the second timestamp information corresponding to the annotated text information.
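A minimal sketch of this copy step, assuming alignment has already paired each annotated character/word with a recognized one (the tuple layout is an assumption carried over from the earlier example; pairs with no aligned match carry None here, and how such gaps are filled is not specified by the patent):

```python
# After alignment, each annotated word is paired with its matched ASR
# word's (word, start, end) record, or None where nothing aligned.
aligned = [
    ("天", ("天", 19.83, 20.49)),
    ("很", ("很", 20.49, 20.79)),
    ("热", ("热", 20.79, 21.00)),
    ("，", None),  # punctuation: no matching audio span
]

def copy_timestamps(aligned_pairs):
    """Build second timestamp info: each annotated word inherits the
    start/end span of its matched recognized word."""
    stamped = []
    for ann_word, match in aligned_pairs:
        if match is None:
            stamped.append((ann_word, None, None))
        else:
            _, start, end = match
            stamped.append((ann_word, start, end))
    return stamped
```

The output has the same shape as the first timestamp information, but indexed by the annotated text, which is what the splitting step below consumes.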
In an implementable embodiment, before content-matching the annotated text information and the speech-recognition text information, the method includes:
obtaining, through the speech recognition system, the confidence of each character/word in the speech-recognition text information; and
detecting and replacing the corresponding character/word in the annotated text information according to the confidence of each character/word.
In this embodiment, the annotated text information obtained in step 101 may contain errors; in the example, "显露" in "天很热地球南极冰川都显露" is such an error. Therefore, before step 103 is performed, the confidence of each character/word is obtained from the speech recognition system at the same time the speech-recognition text information is generated.
If a character/word's confidence exceeds a preset threshold, that character/word is deemed highly accurate; the corresponding character/word in the annotated text information is then checked, and if the contents disagree it is replaced, e.g. "天很热地球南极冰川都显露" becomes "天很热地球南极冰川都陷落". This step reduces the amount of computation in the subsequent edit-distance algorithm and thus improves running efficiency.
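One way this pre-correction could look; the threshold value and the direction of replacement (a high-confidence recognized word overrides the annotated word at the aligned position) are assumptions, since the patent only says "preset threshold":

```python
CONF_THRESHOLD = 0.9  # assumed value; the patent leaves it unspecified

def patch_annotation(annotated_words, asr_words_with_conf,
                     threshold=CONF_THRESHOLD):
    """Replace an annotated word where the aligned ASR word disagrees
    and the recognizer reports high confidence in it."""
    patched = []
    for ann, (asr, conf) in zip(annotated_words, asr_words_with_conf):
        patched.append(asr if conf >= threshold and asr != ann else ann)
    return patched
```

Low-confidence disagreements are left alone, so the later edit-distance pass only has to resolve the genuinely uncertain positions.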
In an implementable embodiment, obtaining, according to the second timestamp information, the sub-text training information in the annotated text information and the sub-audio training information in the audio information includes:
splitting the annotated text information into multiple items of sub-text training information at set characters or by a specified character count, and obtaining from the second timestamp information the start and end timestamps corresponding to each item of sub-text training information; and
splitting the audio information into multiple items of sub-audio training information according to those start and end timestamps.
In this embodiment, the specific process of step 104 is:
after the second timestamp information is generated, punctuation index positions in the annotated text information are detected (or cut positions are located by a specified character count), and the annotated text is split at those positions into multiple items of sub-text training information; e.g. "天很热，地球南极的冰川都陷落了" is split into "天很热" and "地球南极的冰川都陷落了".
The start and end timestamps of each sub-text training item are then obtained from the second timestamp information, e.g. [19.83, 21.00] for "天很热".
The audio information is split according to the start and end timestamps of the sub-text training items, yielding multiple items of sub-audio information corresponding to the sub-text training items; each sub-text item and its corresponding sub-audio item serve as training data.
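The split step above can be sketched as follows; the stamped-tuple input format follows the earlier description, while the punctuation set and the sample-offset conversion in the final comment are assumptions:

```python
def split_subtexts(stamped_words, delimiters="，。！？,.!?"):
    """Split (word, start, end) tuples at punctuation into sub-texts,
    each reported with the span from its first to its last word."""
    pieces, current = [], []
    for word, start, end in stamped_words:
        if word in delimiters:
            if current:
                pieces.append(current)
            current = []
        else:
            current.append((word, start, end))
    if current:
        pieces.append(current)
    return [("".join(w for w, _, _ in p), p[0][1], p[-1][2])
            for p in pieces]

stamped = [("天", 19.83, 20.49), ("很", 20.49, 20.79), ("热", 20.79, 21.00),
           ("，", None, None),
           ("地球", 21.90, 22.05), ("南极", 22.05, 22.62)]
subtexts = split_subtexts(stamped)
# Each (text, start, end) triple can then cut the audio, e.g. at
# int(start * sample_rate) samples into the waveform.
```

Each returned triple pairs one sub-text training item with the audio interval to extract, which is exactly the (sub-text, sub-audio) training pair the method produces.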
In an implementable embodiment, before generating the speech-recognition text information and the first timestamp information corresponding to the audio information, the method further includes:
feeding the annotated text information into the language model of the speech recognition system for training, or dynamically increasing the probability value of the annotated text information while the speech recognition system decodes.
In this embodiment, since the speech-recognition text information produced by the speech recognition system may not be highly accurate, before step 102 is performed the acquired annotated text information is fed into the language model of the speech recognition system for training, or the probability of generating the annotated text information is dynamically increased while the speech recognition system decodes the audio, so as to improve the system's recognition accuracy on that audio.
FIG. 2 is a schematic structural diagram of a training data generation apparatus according to an embodiment of the present invention.
As shown in FIG. 2, another aspect of the present invention provides a training data generation apparatus, the apparatus comprising:
an information receiving module 201, configured to receive audio information and corresponding annotated text information;
a first information generation module 202, configured to generate speech-recognition text information and first timestamp information corresponding to the audio information;
a second information generation module 203, configured to content-match the annotated text information and the speech-recognition text information, and generate second timestamp information corresponding to the annotated text information according to the first timestamp information; and
a training data generation module 204, configured to obtain, according to the second timestamp information, sub-text training information from the annotated text information and sub-audio training information from the audio information.
In this embodiment, in the information receiving module 201, the audio information and corresponding annotated text information are preferably long audio with long annotated text, such as audiobooks, recorded speeches, or interview recordings; they may be crawled from the web or retrieved from a local database.
In the first information generation module 202, the speech-recognition text information and the first timestamp information can be obtained by feeding the received audio into an existing speech recognition system, or by manual measurement. The first timestamp information contains the start and end timestamps of every character or word in the recognized text. For example, if the annotated text is "天很热，地球南极的冰川都陷落了" and the recognized text is "天很热地球南极冰川都显露", the timestamp information might be:
天很热: [天, 19.83, 20.49], [很, 20.49, 20.79], [热, 20.79, 21.00];
地球南极: [地球, 21.90, 22.05], [南极, 22.05, 22.62];
冰川显露: [冰川, 23.67, 24.00], [显露, 24.00, 24.24].
In the second information generation module 203, the annotated text information and the speech-recognition text information are content-matched, so that second timestamp information corresponding to the annotated text information is generated.
Then, in the training data generation module 204, the sub-text training information in the annotated text information and the sub-audio training information in the audio information are obtained according to the second timestamp information.
Thus, by acquiring original audio information and annotated text information and using the audio's timestamp information to extract multiple items of sub-audio training information and corresponding sub-text training information, a large amount of high-quality speech training data is obtained; the process is efficient and reduces cost.
In an implementable embodiment, the second information generation module 203 is specifically configured to:
perform text-similarity matching between the annotated text information and the speech-recognition text information using an edit-distance algorithm; and
taking the annotated text information as the reference, perform text alignment on the characters/words of the matched speech-recognition text information.
In this embodiment, edit distance refers to the minimum number of edit operations required to transform one string into the other; the greater the distance between two strings, the lower their similarity.
During similarity matching, the annotated text information and the speech-recognition text information may each be split into multiple long-sentence items or word-level items using punctuation or a word segmentation tool; the items from the two sides are then pairwise similarity-matched, and the pair with the highest similarity is deemed a match.
Text alignment is then performed: an existing word segmentation tool may first segment the matched speech-recognition text into characters/words; then, taking the annotated text information as the reference, each character/word of the speech-recognition text is put in correspondence with a character/word of the annotated text, with unaligned positions filled with a designated symbol, completing the content matching. For the example above, the alignment is:
Annotated text: 天很热，地球南极的冰川都陷落了;
Recognized text: 天很热__地球南极__冰川都显露__ (the underscores denote blanks).
In an implementable embodiment, the training data generation module 204 is specifically configured to:
split the annotated text information into multiple items of sub-text training information at set characters or by a specified character count, and obtain from the second timestamp information the start and end timestamps corresponding to each item of sub-text training information; and
split the audio information into multiple items of sub-audio training information according to those start and end timestamps.
本实施例中,训练数据生成模块204具体用于:In this embodiment, the training
在生成第二时间戳信息之后,检测标注文本信息中的标点符号索引位置或者根据指定字符数量定位到所需切割的索引位置,按照索引位置将标注文本信息拆分为多个子文本训练信息,如将“天很热,地球南极的冰川都陷落了”分为“天很热”和“地球南极的冰川都陷落了”。After the second timestamp information is generated, the index position of the punctuation mark in the marked text information is detected or the index position to be cut is located according to the specified number of characters, and the marked text information is divided into multiple sub-text training information according to the index position, such as Divide "it is very hot, the glaciers in the south pole of the earth have collapsed" into "it is very hot" and "the glaciers in the south pole of the earth have collapsed".
Next, the start and end timestamps of each sub-text training unit are obtained from the second timestamp information, e.g., [19.83, 21.00] for "天很热".
The audio information is then split according to the start and end timestamps of the sub-text training units, yielding multiple sub-audio segments, one per sub-text training unit; each sub-text training unit together with its corresponding sub-audio segment constitutes one item of training data.
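The audio split reduces to sample-index arithmetic. In this sketch the in-memory sample list, the sample rate, and the unit tuples are assumptions for illustration; a real pipeline would read PCM frames from an audio file:

```python
def slice_audio(samples, sample_rate, units):
    """Cut a PCM sample sequence into one sub-audio clip per
    (sub_text, start_seconds, end_seconds) training unit."""
    clips = []
    for sub_text, start_s, end_s in units:
        i, j = int(start_s * sample_rate), int(end_s * sample_rate)
        clips.append((sub_text, samples[i:j]))  # (text, audio) training pair
    return clips

# Toy example: 2 seconds of "audio" at 10 samples/second
samples = list(range(20))
pairs = slice_audio(samples, 10, [("天很热", 0.5, 1.0)])
print(pairs[0][1])  # samples at indices 5..9
```

The resulting (sub-text, sub-audio) pairs are exactly the training items the paragraph above describes.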
Another aspect of the present invention provides a computer-readable storage medium comprising a set of computer-executable instructions which, when executed, perform any one of the training data generation methods described above.
In this embodiment of the present invention, the computer-readable storage medium comprises a set of computer-executable instructions which, when executed: receive audio information and the corresponding labeled text information; generate speech-recognition text information and first timestamp information corresponding to the audio information; content-match the labeled text information against the recognized text information to generate second timestamp information corresponding to the labeled text, where the second timestamp information corresponds to the first timestamp information; and, according to the second timestamp information, obtain sub-text training units from the labeled text information and sub-audio training units from the audio information. Thus, by obtaining the original audio and its labeled text and using the audio's timestamp information to extract multiple sub-audio training units and their corresponding sub-text training units, a large volume of high-quality speech training data is produced; the process is efficient and reduces cost.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the described specific features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, provided they do not contradict one another, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features thereof.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means two or more, unless otherwise explicitly and specifically defined.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art could, within the technical scope disclosed by the present invention, readily conceive of changes or substitutions, all of which shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010738406.5A CN112037769B (en) | 2020-07-28 | 2020-07-28 | Training data generation method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112037769A true CN112037769A (en) | 2020-12-04 |
CN112037769B CN112037769B (en) | 2024-07-30 |
Family
ID=73583359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010738406.5A Active CN112037769B (en) | 2020-07-28 | 2020-07-28 | Training data generation method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112037769B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112599152A (en) * | 2021-03-05 | 2021-04-02 | 北京智慧星光信息技术有限公司 | Voice data labeling method, system, electronic equipment and storage medium |
CN113129935A (en) * | 2021-06-16 | 2021-07-16 | 北京新唐思创教育科技有限公司 | Audio dotting data acquisition method and device, storage medium and electronic equipment |
CN113539241A (en) * | 2021-07-28 | 2021-10-22 | 广州华多网络科技有限公司 | Speech recognition correction method and corresponding device, equipment and medium |
CN113781994A (en) * | 2021-01-20 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Training set generation method and device, electronic equipment and computer readable medium |
CN117594060A (en) * | 2023-10-31 | 2024-02-23 | 北京邮电大学 | Audio signal content analysis method, device, equipment and storage medium |
CN117975934A (en) * | 2023-12-31 | 2024-05-03 | 上海稀宇极智科技有限公司 | Method and device for obtaining audio text pairs, electronic device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6260011B1 (en) * | 2000-03-20 | 2001-07-10 | Microsoft Corporation | Methods and apparatus for automatically synchronizing electronic audio files with electronic text files |
CN108389577A (en) * | 2018-02-12 | 2018-08-10 | 广州视源电子科技股份有限公司 | Method, system, device and storage medium for optimizing speech recognition acoustic model |
CN110310626A (en) * | 2019-05-23 | 2019-10-08 | 平安科技(深圳)有限公司 | Speech training data generation method, device, equipment and readable storage medium |
CN110516110A (en) * | 2019-07-22 | 2019-11-29 | 平安科技(深圳)有限公司 | Song generation method, device, computer equipment and storage medium |
CN111091834A (en) * | 2019-12-23 | 2020-05-01 | 科大讯飞股份有限公司 | Text and audio alignment method and related product |
Also Published As
Publication number | Publication date |
---|---|
CN112037769B (en) | 2024-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112037769B (en) | Training data generation method and device and computer readable storage medium | |
CN100511215C (en) | Multilingual translation memory and translation method thereof | |
CN110175334B (en) | Text knowledge extraction system and method based on custom knowledge slot structure | |
WO2017097166A1 (en) | Domain named entity recognition method and apparatus | |
JP5130892B2 (en) | Character encoding processing method and system | |
US6975985B2 (en) | Method and system for the automatic amendment of speech recognition vocabularies | |
CN112925563B (en) | A Source Code Recommendation Method for Code Reuse | |
US10410632B2 (en) | Input support apparatus and computer program product | |
CN110310626A (en) | Speech training data generation method, device, equipment and readable storage medium | |
CN112633001A (en) | Text named entity recognition method and device, electronic equipment and storage medium | |
CN110119510A (en) | A kind of Relation extraction method and device based on transmitting dependence and structural auxiliary word | |
CN118152520A (en) | Automatic rapid knowledge base construction method, system and device based on large language model technology | |
JP2009151777A (en) | Method and apparatus for aligning spoken language parallel corpus | |
CN106202255A (en) | Merge the Vietnamese name entity recognition method of physical characteristics | |
CN114372153A (en) | Structured legal document warehousing method and system based on knowledge graph | |
CN112257442A (en) | Policy document information extraction method based on corpus expansion neural network | |
CN110705261B (en) | Chinese text word segmentation method and system thereof | |
US20160328374A1 (en) | Methods and Data Structures for Improved Searchable Formatted Documents including Citation and Corpus Generation | |
JP2014229275A (en) | Query answering device and method | |
CN111368547B (en) | Entity recognition method, device, equipment and storage medium based on semantic analysis | |
CN112733517A (en) | Method for checking requirement template conformity, electronic equipment and storage medium | |
CN116362219A (en) | Information extraction template generation method and device, medium and equipment | |
CN112101019A (en) | Requirement template conformance checking optimization method based on part-of-speech tagging and chunk analysis | |
Partanen et al. | Transforming archived resources with language technology: From manuscripts to language documentation | |
CN114566151B (en) | Data annotation method, device, electronic device and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | Effective date of registration: 20241118. Patentee after: SHANGHAI MOBVOI INFORMATION TECHNOLOGY Co.,Ltd., Room 2015, floor 2, No. 24, Lane 315, Fenggu Road, Xuhui District, Shanghai 200030, China. Patentee before: MOBVOI INFORMATION TECHNOLOGY Co.,Ltd., 100044 1001, 10th floor, office building a, 19 Zhongguancun Street, Haidian District, Beijing, China. |
TR01 | Transfer of patent right |