CN111368504A - Voice data labeling method and device, electronic equipment and medium - Google Patents

Voice data labeling method and device, electronic equipment and medium

Info

Publication number
CN111368504A
CN111368504A (application CN201911359418.0A)
Authority
CN
China
Prior art keywords
voice data
text
labeling
features
automatic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911359418.0A
Other languages
Chinese (zh)
Inventor
颜雅玲
肖龙源
李稀敏
蔡振华
刘晓葳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Kuaishangtong Technology Co Ltd
Original Assignee
Xiamen Kuaishangtong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Kuaishangtong Technology Co Ltd filed Critical Xiamen Kuaishangtong Technology Co Ltd
Priority to CN201911359418.0A priority Critical patent/CN111368504A/en
Publication of CN111368504A publication Critical patent/CN111368504A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G10L17/22 Interactive procedures; Man-machine interfaces

Abstract

The application provides a voice data labeling method and apparatus, an electronic device, and a computer-readable medium. The method comprises the following steps: receiving voice data to be labeled, and performing speech recognition on the voice data to obtain a recognition text; acquiring a user-confirmed text after the user confirms the recognition text; extracting automatic labeling features from the recognition text and the user-confirmed text; and labeling the voice data according to the automatic labeling features and a pre-constructed automatic labeling model. Because automatic labeling features are extracted and the voice data are labeled according to those features and the automatic labeling model, the voice data can be labeled automatically without manual annotation. This addresses the drawbacks of manual labeling, improves the efficiency of voice data labeling, and reduces cost.

Description

Voice data labeling method and device, electronic equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for labeling voice data, an electronic device, and a computer-readable medium.
Background
With the increasing popularity of intelligent terminals and breakthroughs in artificial-intelligence technology, voice has become an important channel of human-computer interaction and is widely used on intelligent terminals of all kinds. More and more users are accustomed to speaking to machines, entering information by voice according to the needs of an application in order to obtain a response from the machine.
In the related art, voice data is usually labeled manually. However, as intelligent terminals become widely used, ever more voice data is collected. Purely manual labeling falls far short of the demand for annotating massive amounts of voice data: the cost is high, the labeling cycle is long, and the efficiency is low, so it clearly cannot meet application requirements.
Disclosure of Invention
The application aims to provide a voice data labeling method and device, electronic equipment and a computer readable medium.
A first aspect of the present application provides a method for labeling voice data, including:
receiving voice data to be labeled, and performing speech recognition on the voice data to obtain a recognition text;
acquiring a user-confirmed text after the user confirms the recognition text;
extracting automatic labeling features from the recognition text and the user-confirmed text;
and labeling the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
In some possible implementations, the automatic labeling features include at least one of:
voiceprint features, grammatical features, semantic features.
In some possible implementations, the voiceprint features include at least one of:
recognition-text confidence features and user-confirmed-text confidence features.
In some possible implementations, the method further includes training and generating the automatic labeling model through the following steps:
collecting data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data;
extracting automatic labeling features from the recognition text and the user-confirmed text;
and taking the automatic labeling features and the manual labeling results as training data, training a neural network, and generating the automatic labeling model.
A second aspect of the present application provides a voice data labeling apparatus, including:
a receiving module, configured to receive voice data to be labeled and perform speech recognition on the voice data to obtain a recognition text;
an acquisition module, configured to acquire a user-confirmed text after the user confirms the recognition text;
an extraction module, configured to extract automatic labeling features from the recognition text and the user-confirmed text;
and a labeling module, configured to label the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
In some possible implementations, the automatic labeling features include at least one of:
voiceprint features, grammatical features, semantic features.
In some possible implementations, the voiceprint features include at least one of:
recognition-text confidence features and user-confirmed-text confidence features.
In some possible implementations, the apparatus further includes:
a modeling module, configured to collect data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data; extract automatic labeling features from the recognition text and the user-confirmed text; and take the automatic labeling features and the manual labeling results as training data, train a neural network, and generate the automatic labeling model.
A third aspect of the present application provides an electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the method of the first aspect of the present application.
A fourth aspect of the present application provides a computer-readable medium having computer-readable instructions stored thereon, which are executable by a processor to implement the method of the first aspect of the present application.
Compared with the related art, the voice data labeling method, apparatus, electronic device, and medium provided by the present application receive voice data to be labeled and perform speech recognition on it to obtain a recognition text; acquire a user-confirmed text after the user confirms the recognition text; extract automatic labeling features from the recognition text and the user-confirmed text; and label the voice data according to the automatic labeling features and a pre-constructed automatic labeling model. Because automatic labeling features are extracted and the voice data are labeled according to those features and the automatic labeling model, the voice data can be labeled automatically without manual annotation. This addresses the drawbacks of manual labeling, improves labeling efficiency, and reduces cost.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a flow chart of a method for voice data annotation provided by some embodiments of the present application;
FIG. 2 illustrates a schematic diagram of a voice data annotation device provided in some embodiments of the present application;
fig. 3 illustrates a schematic diagram of an electronic device provided by some embodiments of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
In addition, the terms "first", "second", and the like are used to distinguish different objects rather than to describe a particular order. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may optionally include other steps or elements not listed, or steps or elements inherent to such a process, method, article, or apparatus.
The embodiment of the application provides a voice data labeling method and device, electronic equipment and a computer readable medium, which are described below with reference to the accompanying drawings.
Referring to fig. 1, which illustrates a flowchart of a voice data annotation method according to some embodiments of the present application, as shown in the figure, the voice data annotation method may include the following steps:
step S101: and receiving voice data to be marked, and performing voice recognition on the voice data to obtain a recognition text.
In this embodiment, the voice to be subjected to voice annotation is defined as the voice data to be annotated, and the voice data to be annotated may be the voice data currently or previously input and recorded and stored by the user. Specifically, it may be voice data input by the user using a voice input function when using a recording application or a social application.
After receiving the voice data, the voice data may be recognized as text data by using a voice recognition engine, resulting in a recognized text.
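Step S101 can be sketched as follows. This is a minimal illustration, assuming a generic ASR engine behind a hypothetical `recognize` function; the embodiment does not name a specific speech recognition engine, so the stub below only simulates one to keep the sketch runnable.

```python
def recognize(voice_data: bytes) -> str:
    """Hypothetical ASR engine call, stubbed so the sketch runs.

    A real system would hand the audio to a speech recognition
    engine here and return its transcript."""
    fake_transcripts = {b"\x01\x02\x03": "please play some music"}
    return fake_transcripts.get(voice_data, "")

def receive_and_recognize(voice_data: bytes) -> str:
    """Step S101: receive voice data to be labeled, return its recognition text."""
    if not voice_data:
        raise ValueError("received empty voice data")
    return recognize(voice_data)

print(receive_and_recognize(b"\x01\x02\x03"))  # -> please play some music
```

The recognition text produced here is what the user later confirms in step S102.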
Step S102: acquiring a user-confirmed text after the user confirms the recognition text.
In this embodiment, the user-confirmed text is the text finally adopted after the user has confirmed the recognition text. Specifically, the user may correct or otherwise modify the recognition text.
Step S103: extracting automatic labeling features from the recognition text and the user-confirmed text.
In this embodiment, the recognition text and the user-confirmed text may be analyzed from the perspectives of voiceprint, grammar, semantics, and the like, and the automatic labeling features extracted accordingly.
Thus, in some embodiments, the automatic tagging features may include at least one of:
voiceprint features, grammatical features, semantic features.
Specifically, the voiceprint features describe, acoustically from the voice data, the credibility of the recognition text and the user-confirmed text; they include the confidence features of the recognition text and of the user-confirmed text. The grammatical features capture properties of words in three respects: form, combining ability, and sentence-making function. The semantic features describe the semantic similarity between the recognition text and the user-confirmed text; they include the word vectors of the recognition text and of the user-confirmed text, the word durations of the recognition text and of the user-confirmed text, and the word similarity between the two texts.
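The assembly of such a feature vector can be sketched as follows. This is a simplifying sketch, not the patent's exact formulas: the two confidence values are assumed to be supplied by the ASR engine, word counts stand in for the duration features, and `difflib.SequenceMatcher` stands in for the word-similarity measure.

```python
import difflib

def extract_labeling_features(rec_text: str, conf_text: str,
                              rec_confidence: float,
                              conf_confidence: float) -> dict:
    """Build an automatic-labeling feature vector from the recognition
    text and the user-confirmed text (sketch; feature families only)."""
    similarity = difflib.SequenceMatcher(None, rec_text, conf_text).ratio()
    return {
        # Voiceprint family: confidence of each text.
        "rec_confidence": rec_confidence,
        "conf_confidence": conf_confidence,
        # Crude word-level descriptors of both texts.
        "rec_words": len(rec_text.split()),
        "conf_words": len(conf_text.split()),
        # Semantic family: similarity between the two texts.
        "similarity": similarity,
    }

feats = extract_labeling_features("play sum music", "play some music", 0.82, 0.99)
print(feats["rec_words"], feats["conf_words"], feats["similarity"] > 0.8)  # -> 3 3 True
```

A high similarity with high confidences suggests the recognition was essentially correct; a low similarity indicates the user substantially rewrote the recognition text.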
Step S104: labeling the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
In this embodiment, an automatic labeling model may be constructed in advance, whose input is the automatic labeling features and whose output is labeling information. After the automatic labeling features are extracted, the labeling information with the highest probability under the model is taken as the label of the voice data to be labeled. For example, numbers or letters may be used as labeling information.
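The highest-probability selection described above can be sketched as follows; the model interface (a callable returning a label-to-probability mapping) is an assumption, and the toy model merely stands in for the trained network.

```python
def label_voice_data(features: dict, model) -> str:
    """Step S104: score candidate labels with the automatic labeling
    model and keep the label with the highest probability."""
    probabilities = model(features)               # {label: probability}
    return max(probabilities, key=probabilities.get)

# Toy stand-in for the trained network: returns a fixed distribution
# over three numeric labels.
toy_model = lambda feats: {"1": 0.15, "2": 0.70, "3": 0.15}
print(label_voice_data({"similarity": 0.9}, toy_model))  # -> 2
```

In a real deployment the callable would wrap the trained neural network and the label set would be whatever numbers or letters the labeling scheme uses.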
Specifically, the automatic labeling model can be constructed as follows:
collecting data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data;
extracting automatic labeling features from the recognition text and the user-confirmed text;
and taking the automatic labeling features and the manual labeling results as training data, training a neural network, and generating the automatic labeling model.
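The training step above can be sketched with a single logistic unit in place of the neural network. The embodiment only says "train a neural network", so the one-neuron model, the two-feature input, and the binary manual labels below are all simplifying assumptions.

```python
import math

def train_auto_labeling_model(features, manual_labels, epochs=300, lr=0.5):
    """Fit one logistic unit to (automatic labeling features, manual
    labeling result) pairs via gradient descent on the log-loss."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, manual_labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # predicted probability
            g = p - y                          # log-loss gradient w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    # Return the trained model as a scoring callable.
    return lambda x: 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Features: [recognition confidence, text similarity]; manual label 1 = correct.
X = [[0.9, 0.95], [0.95, 0.9], [0.2, 0.3], [0.1, 0.2]]
y = [1, 1, 0, 0]
model = train_auto_labeling_model(X, y)
print(model([0.9, 0.9]) > 0.5, model([0.1, 0.1]) < 0.5)  # -> True True
```

A production system would replace this unit with a multi-layer network and a richer feature vector, but the data flow (manual labels as supervision over automatic features) is the same.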
The voice data labeling method can be used on a client. In the embodiments of the present application, the client may comprise hardware or software. When the client comprises hardware, it may be any of various electronic devices that have a display screen and support information interaction, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. When the client comprises software, it may be installed in such an electronic device and implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not specifically limited herein.
Compared with the related art, the voice data labeling method provided by the embodiments of the present application receives voice data to be labeled and performs speech recognition on it to obtain a recognition text; acquires a user-confirmed text after the user confirms the recognition text; extracts automatic labeling features from the recognition text and the user-confirmed text; and labels the voice data according to the automatic labeling features and a pre-constructed automatic labeling model. Because automatic labeling features are extracted and the voice data are labeled according to those features and the automatic labeling model, the voice data can be labeled automatically without manual annotation. This addresses the drawbacks of manual labeling, improves labeling efficiency, and reduces cost.
The foregoing embodiments provide a voice data labeling method; correspondingly, the present application also provides a voice data labeling apparatus. The apparatus provided by the embodiments of the present application can implement the voice data labeling method, and may be realized through software, hardware, or a combination of the two. For example, it may comprise integrated or separate functional modules or units that perform the corresponding steps of the method described above. Please refer to fig. 2, which illustrates a schematic diagram of a voice data labeling apparatus according to some embodiments of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described relatively briefly; for relevant details, refer to the description of the method embodiments. The apparatus embodiments described below are merely illustrative.
As shown in fig. 2, the voice data labeling apparatus 10 may include:
a receiving module 101, configured to receive voice data to be labeled and perform speech recognition on the voice data to obtain a recognition text;
an acquisition module 102, configured to acquire a user-confirmed text after the user confirms the recognition text;
an extraction module 103, configured to extract automatic labeling features from the recognition text and the user-confirmed text;
and a labeling module 104, configured to label the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
In some implementations of the embodiments of the present application, the automatic labeling features include at least one of:
voiceprint features, grammatical features, semantic features.
In some implementations of the embodiments of the present application, the voiceprint features include at least one of:
recognition-text confidence features and user-confirmed-text confidence features.
In some implementations of the embodiments of the present application, the apparatus 10 further comprises:
a modeling module, configured to collect data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data; extract automatic labeling features from the recognition text and the user-confirmed text; and take the automatic labeling features and the manual labeling results as training data, train a neural network, and generate the automatic labeling model.
The voice data labeling apparatus 10 provided in the embodiments of the present application and the voice data labeling method provided in the foregoing embodiments share the same inventive concept and have the same beneficial effects.
The embodiments of the present application further provide an electronic device corresponding to the voice data labeling method provided in the foregoing embodiments. The electronic device may be a client device, such as a mobile phone, a notebook computer, a tablet computer, or a desktop computer, configured to execute the voice data labeling method.
Please refer to fig. 3, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 3, the electronic device 20 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the voice data annotation method provided by any one of the foregoing embodiments when executing the computer program.
The memory 201 may include high-speed random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is implemented through at least one communication interface 203 (wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
The bus 202 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The memory 201 stores a program; after receiving an execution instruction, the processor 200 executes the program. The voice data labeling method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor 200.
The processor 200 may be an integrated circuit chip with signal-processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 200 or by instructions in the form of software. The processor 200 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, capable of implementing or executing the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 201; the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiments of the present application shares the same inventive concept as the voice data labeling method provided by the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs, or implements.
The present application further provides a computer-readable medium corresponding to the voice data labeling method provided in the foregoing embodiments, on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the voice data labeling method provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or other optical or magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiment and the voice data labeling method provided by the embodiments of the present application share the same inventive concept, and have the same beneficial effects as the method adopted, run, or implemented by the application program stored on the medium.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the scope of the technical solutions of the embodiments of the present application, and shall be covered by the claims and the specification.

Claims (10)

1. A method for labeling voice data, comprising:
receiving voice data to be labeled, and performing speech recognition on the voice data to obtain a recognition text;
acquiring a user-confirmed text after the user confirms the recognition text;
extracting automatic labeling features from the recognition text and the user-confirmed text;
and labeling the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
2. The method of claim 1, wherein the automatic labeling features comprise at least one of:
voiceprint features, grammatical features, semantic features.
3. The method of claim 2, wherein the voiceprint features comprise at least one of:
recognition-text confidence features and user-confirmed-text confidence features.
4. The method of claim 1, further comprising training and generating the automatic labeling model through the following steps:
collecting data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data;
extracting automatic labeling features from the recognition text and the user-confirmed text;
and taking the automatic labeling features and the manual labeling results as training data, training a neural network, and generating the automatic labeling model.
5. A voice data labeling apparatus, comprising:
a receiving module, configured to receive voice data to be labeled and perform speech recognition on the voice data to obtain a recognition text;
an acquisition module, configured to acquire a user-confirmed text after the user confirms the recognition text;
an extraction module, configured to extract automatic labeling features from the recognition text and the user-confirmed text;
and a labeling module, configured to label the voice data according to the automatic labeling features and a pre-constructed automatic labeling model.
6. The apparatus of claim 5, wherein the automatic labeling features comprise at least one of:
voiceprint features, grammatical features, semantic features.
7. The apparatus of claim 6, wherein the voiceprint features comprise at least one of:
recognition-text confidence features and user-confirmed-text confidence features.
8. The apparatus of claim 5, further comprising:
a modeling module, configured to collect data, the data comprising: voice data, a recognition text corresponding to the voice data, a user-confirmed text corresponding to the voice data, and a manual labeling result corresponding to the voice data; extract automatic labeling features from the recognition text and the user-confirmed text; and take the automatic labeling features and the manual labeling results as training data, train a neural network, and generate the automatic labeling model.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, characterized in that the processor executes the computer program to implement the method according to any of claims 1 to 4.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 4.
CN201911359418.0A 2019-12-25 2019-12-25 Voice data labeling method and device, electronic equipment and medium Pending CN111368504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911359418.0A CN111368504A (en) 2019-12-25 2019-12-25 Voice data labeling method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911359418.0A CN111368504A (en) 2019-12-25 2019-12-25 Voice data labeling method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN111368504A 2020-07-03

Family

ID=71209997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911359418.0A Pending CN111368504A (en) 2019-12-25 2019-12-25 Voice data labeling method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111368504A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492379A (en) * 2017-06-30 2017-12-19 百度在线网络技术(北京)有限公司 A kind of voice-print creation and register method and device
CN107578769A (en) * 2016-07-04 2018-01-12 科大讯飞股份有限公司 Speech data mask method and device
CN108875768A (en) * 2018-01-23 2018-11-23 北京迈格威科技有限公司 Data mask method, device and system and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241445A (en) * 2020-10-26 2021-01-19 竹间智能科技(上海)有限公司 Labeling method and device, electronic equipment and storage medium
CN112241445B (en) * 2020-10-26 2023-11-07 竹间智能科技(上海)有限公司 Labeling method and device, electronic equipment and storage medium
WO2022134832A1 (en) * 2020-12-23 2022-06-30 深圳壹账通智能科技有限公司 Address information extraction method, apparatus and device, and storage medium
CN114441029A (en) * 2022-01-20 2022-05-06 深圳壹账通科技服务有限公司 Recording noise detection method, device, equipment and medium of voice labeling system

Similar Documents

Publication Publication Date Title
JP6909832B2 (en) Methods, devices, equipment and media for recognizing important words in audio
CN110659366A (en) Semantic analysis method and device, electronic equipment and storage medium
US20180277097A1 (en) Method and device for extracting acoustic feature based on convolution neural network and terminal device
CN110888968A (en) Customer service dialogue intention classification method and device, electronic equipment and medium
CN111368504A (en) Voice data labeling method and device, electronic equipment and medium
CN110597952A (en) Information processing method, server, and computer storage medium
WO2016101717A1 (en) Touch interaction-based search method and device
CN109979450B (en) Information processing method and device and electronic equipment
US20170116521A1 (en) Tag processing method and device
CN110718226A (en) Speech recognition result processing method and device, electronic equipment and medium
CN106156794B (en) Character recognition method and device based on character style recognition
CN111291551B (en) Text processing method and device, electronic equipment and computer readable storage medium
CN112397051A (en) Voice recognition method and device and terminal equipment
CN112084752A (en) Statement marking method, device, equipment and storage medium based on natural language
CN111783471A (en) Semantic recognition method, device, equipment and storage medium of natural language
CN114860905A (en) Intention identification method, device and equipment
CN107506407B (en) File classification and calling method and device
CN112669850A (en) Voice quality detection method and device, computer equipment and storage medium
CN114637831A (en) Data query method based on semantic analysis and related equipment thereof
CN114218364A (en) Question-answer knowledge base expansion method and device
CN110276001B (en) Checking page identification method and device, computing equipment and medium
CN112364131A (en) Corpus processing method and related device thereof
CN110895924B (en) Method and device for reading document content aloud, electronic equipment and readable storage medium
CN113763947A (en) Voice intention recognition method and device, electronic equipment and storage medium
CN112328308A (en) Method and device for recognizing text

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination