CN112560453A - Voice information verification method and device, electronic equipment and medium - Google Patents
- Publication number
- CN112560453A CN112560453A CN202011508407.7A CN202011508407A CN112560453A CN 112560453 A CN112560453 A CN 112560453A CN 202011508407 A CN202011508407 A CN 202011508407A CN 112560453 A CN112560453 A CN 112560453A
- Authority
- CN
- China
- Prior art keywords
- data set
- text
- preset
- value
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F40/242 — Handling natural language data; Natural language analysis; Lexical tools; Dictionaries
- G06F16/3343 — Information retrieval; Querying; Query execution using phonetics
- G06F16/35 — Information retrieval of unstructured textual data; Clustering; Classification
- G06F18/214 — Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F40/247 — Handling natural language data; Lexical tools; Thesauruses; Synonyms
- G06F40/279 — Handling natural language data; Natural language analysis; Recognition of textual entities
- G06N3/045 — Neural networks; Architecture; Combinations of networks
- G10L15/26 — Speech recognition; Speech to text systems
- G10L25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
Abstract
The invention relates to voice processing technology and discloses a voice information verification method comprising the following steps: performing voice recognition, data labeling, text enhancement, and probability calculation on an original voice data set to obtain a set of classification probability values; calculating a cross-entropy loss value between these values and real probability values, and obtaining a standard text detection model once the cross-entropy loss value is smaller than a loss threshold; performing probability calculation on voice data to be processed to obtain a set of prediction probability values; collecting the prediction probability values greater than a probability threshold into a prediction result list; extracting the categories corresponding to those prediction probability values; and judging that the voice data to be processed passes verification when the number of categories is greater than or equal to a preset number threshold. The invention also relates to blockchain technology: the training data set and related data may be stored in blockchain nodes. The invention further discloses a voice information verification device, an electronic device, and a storage medium. The invention can verify the content contained in voice quickly and accurately.
Description
Technical Field
The present invention relates to the field of voice processing technologies, and in particular, to a method and an apparatus for verifying voice information, an electronic device, and a computer-readable storage medium.
Background
In daily life and work, conversations are often recorded, and the recordings are later transcribed and checked to determine whether they contain certain types of information. For example, when a merchant communicates with a user by telephone or face to face, the conversation is often recorded, and a verifier then determines whether the recorded content contains certain types of information: if it does, the recording is judged complete; if it does not, the recording is judged incomplete.
In the prior art, verification of voice information usually requires verification personnel to compile keywords manually, after which the voice information is verified by keyword matching. However, compiling keywords is labor-intensive and inefficient, and because the keywords can never be exhaustive, the verification result may be inaccurate when relevant keywords are missing.
Disclosure of Invention
The invention provides a voice information verification method, a voice information verification device, electronic equipment and a computer readable storage medium, and mainly aims to quickly and accurately verify the content contained in voice.
In order to achieve the above object, a method for verifying voice information provided by the present invention includes:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value set and a preset real probability value, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed;
summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
Optionally, the performing text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set includes:
screening according to the length of the text in the labeled data set to obtain a screened data set;
and combining a preset dictionary to perform text enhancement on the screening data set to obtain a training data set.
Optionally, the performing a screening process according to the length of the text in the labeled data set to obtain a screened data set includes:
eliminating texts with the length of texts in the labeled data set smaller than a first text threshold;
reserving texts with the text length larger than the first text threshold value and smaller than a second text threshold value in the labeling data set, wherein the second text threshold value is larger than the first text threshold value;
and if the text with the text length larger than the second text threshold value exists in the labeled data set, directly intercepting a plurality of characters in the labeled data set as a screening data set according to the sequence from front to back.
Optionally, the text enhancement is performed on the filtered data set in combination with a preset dictionary to obtain a training data set, including:
randomly extracting a plurality of target words from the screening dataset;
randomly extracting synonyms of the target words from a preset dictionary;
and adding the synonyms into the screening data set to obtain a training data set.
Optionally, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
carrying out feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set into a fully-connected layer of the preset text detection model for probability calculation to obtain a classification probability value set.
Optionally, the encoding the training data set to obtain a training vector set includes:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing stop-word removal on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
Optionally, the calculating a cross entropy loss value between the classification probability value in the classification probability value set and a preset true probability value includes:
calculating the cross entropy loss value by using the following formula:

L = -(1/n) · Σ_x [ y·ln(a) + (1 − y)·ln(1 − a) ]

wherein L is the cross entropy loss value, x ranges over the training data in the training data set, y is the preset real probability value, a is the classification probability value, and n is the total number of training data in the training data set.
In order to solve the above problem, the present invention further provides a voice information verification apparatus, including:
the data processing module is used for acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set; labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
the model training module is used for calculating a cross entropy loss value between the classification probability value set and a preset real probability value, adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, and obtaining a standard text detection model;
the prediction result list generation module is used for acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed; summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and the data checking module is used for extracting the categories corresponding to the prediction probability values in the prediction result list, and when the number of the categories is greater than or equal to a preset number threshold, judging that the voice data to be processed passes the checking, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to cause the at least one processor to perform the voice information verification method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the above voice information verification method.
In the embodiment of the invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which increases the diversity of the training data and thereby improves the robustness and accuracy of the standard text detection model obtained by training. Voice data to be processed is then acquired and input into the standard text detection model for probability calculation to obtain a prediction probability value set, which comprises the prediction probability values of the voice data to be processed for different preset categories. The standard text detection model is thus used to determine the categories of content contained in the voice data to be processed, and verification is performed on the model's output. Because the standard text detection model outputs results quickly and accurately, the verification result can likewise be obtained quickly and accurately. Therefore, the voice information verification method, device, electronic device, and computer-readable storage medium provided by the invention can verify the content contained in voice quickly and accurately.
Drawings
Fig. 1 is a schematic flow chart of a voice information verification method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a voice information verification apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device implementing a voice information verification method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a voice information verification method. The execution subject of the method includes, but is not limited to, at least one kind of electronic device, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the application. In other words, the voice information verification method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a voice information verification method according to an embodiment of the present invention. In this embodiment, the voice information verification method includes:
s1, obtaining an original voice data set, and carrying out voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set.
In an embodiment of the present invention, the original voice data set includes a plurality of pieces of voice information collected over a period of time; for example, a plurality of telephone recordings of a credit card approver calling customers to verify their information.
Specifically, voice recognition is performed on the recording data by using a preset voice recognition model, converting the voice data in the original voice data set into text data, which facilitates subsequent evaluation and is more intuitive and convenient.
Wherein the speech recognition model may be a CRNN (Convolutional Recurrent Neural Network).
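The transcription step S1 can be sketched as follows. This is an illustrative sketch only: `VoiceClip`, `transcribe_dataset`, and `fake_recognize` are hypothetical names, and the dummy recognizer merely stands in for the preset CRNN speech recognition model the patent names; a real system would wrap an actual ASR engine here.

```python
# Sketch of S1: convert every clip in the original voice data set into text.
# `fake_recognize` is a placeholder for the CRNN-based recognition model.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VoiceClip:
    clip_id: str
    audio: bytes  # raw audio payload

def transcribe_dataset(clips: List[VoiceClip],
                       recognize: Callable[[bytes], str]) -> List[str]:
    """Apply the recognition model to each clip, yielding the text data set."""
    return [recognize(clip.audio) for clip in clips]

# Dummy recognizer standing in for the CRNN model (for illustration only).
def fake_recognize(audio: bytes) -> str:
    return audio.decode("utf-8")  # pretend the payload already holds its transcript

clips = [VoiceClip("c1", b"May I ask if you are Mr. X?"),
         VoiceClip("c2", b"Yes.")]
texts = transcribe_dataset(clips, fake_recognize)
```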
And S2, labeling the type of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set.
In the embodiment of the invention, the text data set can be labeled by calling an interface of an automatic labeling tool, and the text data in the text data set is labeled into the corresponding category to obtain the labeled data set.
For example, a certain text data included in the text data set is as follows:
Approver: 'May I ask, are you Mr. XX?'
Customer: 'Yes.'
Approver: 'Are you currently unmarried?'
Customer: 'Yes.'
Approver: 'Is your home in the xx community on x road?'
Customer: 'Correct.'
The text data set is labeled according to the preset categories, which are, in order: email, marital status, education, relatives, mobile phone number, employer name, employer address, employer telephone, and residential address. The labeled data set obtained after the labeling processing is: marital status: unmarried; residential address: x road, xx community.
Specifically, the performing text enhancement processing on the labeled data set by combining with a preset dictionary to obtain a training data set includes:
screening according to the length of the text in the labeled data set to obtain a screened data set;
and combining a preset dictionary to perform text enhancement on the screening data set to obtain a training data set.
Further, the screening according to the length of the text in the labeled data set to obtain a screened data set includes:
eliminating texts with the length of texts in the labeled data set smaller than a first text threshold;
reserving texts with the text length larger than the first text threshold value and smaller than a second text threshold value in the labeling data set, wherein the second text threshold value is larger than the first text threshold value;
and if the text with the text length larger than the second text threshold value exists in the labeled data set, directly intercepting a plurality of characters in the labeled data set as a screening data set according to the sequence from front to back.
Preferably, in the embodiment of the present invention, the first text threshold may be preset, for example, to 5, and the second text threshold may also be preset, for example, to 512.
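The length-based screening described above can be sketched as below. The function name is illustrative, and the boundary handling (texts exactly at a threshold) is an assumption, since the embodiment only specifies strict comparisons.

```python
def screen_by_length(texts, min_len=5, max_len=512):
    """Length screening sketch (thresholds 5 and 512 follow the embodiment):
    drop texts shorter than min_len, keep mid-length texts unchanged, and
    truncate texts longer than max_len to their leading max_len characters."""
    screened = []
    for t in texts:
        if len(t) < min_len:      # eliminate too-short texts
            continue
        if len(t) > max_len:      # keep only the first max_len characters
            t = t[:max_len]
        screened.append(t)
    return screened
```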
Further, the text enhancement is performed on the screening data set by combining with a preset dictionary to obtain a training data set, including:
randomly extracting a plurality of target words from the screening dataset;
randomly extracting synonyms of the target words from a preset dictionary;
and adding the synonyms into the screening data set to obtain a training data set.
The synonyms of the target words can be amplified into the screening data set by performing text enhancement processing, so that the number of the training data sets is increased, and the accuracy of model training is improved.
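The synonym-based text enhancement can be sketched as follows. This is a simplified sketch under stated assumptions: whitespace tokenisation stands in for a proper word segmenter (Chinese text would need one), and the function and parameter names are illustrative.

```python
import random

def augment_with_synonyms(texts, synonym_dict, n_targets=2, seed=0):
    """Text enhancement sketch: randomly pick target words from the screened
    data set, look up synonyms in a preset dictionary, and add the resulting
    variant sentences back into the data set."""
    rng = random.Random(seed)
    augmented = list(texts)
    for text in texts:
        words = text.split()                                   # naive segmentation
        candidates = [w for w in words if w in synonym_dict]   # words with synonyms
        for w in rng.sample(candidates, min(n_targets, len(candidates))):
            synonym = rng.choice(synonym_dict[w])
            augmented.append(text.replace(w, synonym))         # amplified sample
    return augmented
```

As the patent notes, this amplifies the screened data set with synonym variants, increasing the number of training samples.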
And S3, inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to the training data in the training data set.
In an optional embodiment of the invention, the preset bert-base-Chinese model is pre-trained through related data acquired in the credit card approval field to obtain the text detection model.
Specifically, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
carrying out feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set into a fully-connected layer of the preset text detection model for probability calculation to obtain a classification probability value set.
Further, the encoding the training data set to obtain a training vector set includes:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing stop-word removal on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
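The three encoding stages (segmentation, stop-word removal, vocabulary mapping) can be sketched as below. Again, whitespace tokenisation and the integer-id vocabulary are illustrative assumptions; the patent's model would use its own tokenizer and embedding table.

```python
def encode_dataset(texts, stop_words, vocab):
    """Encoding sketch: word segmentation, stop-word removal, and mapping
    each remaining token to an integer id via a preset vocabulary table."""
    vectors = []
    for text in texts:
        tokens = text.split()                                # word segmentation
        tokens = [t for t in tokens if t not in stop_words]  # stop-word removal
        vectors.append([vocab.get(t, 0) for t in tokens])    # map to ids; 0 = unknown
    return vectors
```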
S4, calculating a cross entropy loss value between the classification probability value set and a preset real probability value, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model.
In the embodiment of the present invention, the calculating a cross entropy loss value between the classification probability value set and a preset true probability value includes:
calculating the cross entropy loss value by using the following formula:

L = -(1/n) · Σ_x [ y·ln(a) + (1 − y)·ln(1 − a) ]

wherein L is the cross entropy loss value, x ranges over the training data in the training data set, y is the preset real probability value, a is the classification probability value, and n is the total number of training data in the training data set.
Specifically, adjusting the internal parameters of the text detection model according to the cross entropy loss value includes: if the cross entropy loss value is greater than or equal to the preset loss threshold, the internal parameters of the text detection model are adjusted, the training data set is input into the adjusted model to obtain new classification probability values, and the cross entropy loss value is recalculated; this is repeated until the cross entropy loss value is smaller than the preset loss threshold, yielding the standard text detection model.
Preferably, the internal parameters may be the gradients of the model or the weights of the model.
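The loss of S4 is the standard averaged cross-entropy, which can be computed as below; the function name is illustrative, and a real training loop would backpropagate this loss through the model rather than merely evaluate it.

```python
import math

def cross_entropy_loss(a_values, y_values):
    """Cross entropy of S4: L = -(1/n) * sum_x [ y*ln(a) + (1-y)*ln(1-a) ],
    where a is the classification probability, y the preset real probability,
    and n the number of training samples."""
    n = len(a_values)
    total = 0.0
    for a, y in zip(a_values, y_values):
        total += y * math.log(a) + (1 - y) * math.log(1 - a)
    return -total / n

# Training would repeat: compute loss, adjust parameters, recompute,
# until the loss drops below the preset threshold.
loss = cross_entropy_loss([0.9, 0.8], [1.0, 1.0])
```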
S5, obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed.
In an embodiment of the present invention, the prediction probability value set includes prediction probability values of different categories corresponding to the to-be-processed speech data, for example, the to-be-processed speech data includes a sentence a, a sentence B, and a sentence C, and the preset categories include a category a, a category B, and a category C, and then the prediction probability value set includes prediction probability values of the sentence a belonging to the category a, the category B, and the category C, and prediction probability values of the sentence C belonging to the category a, the category B, and the category C.
And S6, summarizing the prediction probability values which are larger than a preset probability threshold value in the prediction probability value set to obtain a prediction result list.
In the embodiment of the invention, the prediction probability value set consists of a plurality of prediction probability values. When a prediction probability value is greater than the preset probability threshold, it is retained; the whole prediction probability value set is traversed, and the retained prediction probability values are summarized.
For example, if the preset probability threshold is 90%, the retained prediction probability values correspond to the highest-probability category of each sentence in the voice data to be processed.
For example, if the prediction probability value set includes prediction probability values of 30%, 50%, and 95% for sentence A belonging to categories a, b, and c respectively; 45%, 25%, and 96% for sentence B; and 35%, 60%, and 91% for sentence C, then the probability of sentence A belonging to category c, the probability of sentence B belonging to category c, and the probability of sentence C belonging to category c are retained; that is, the prediction result list is {95%, 96%, 91%}.
For another example, if the prediction probability value set includes prediction probability values of 95%, 50%, and 30% for sentence A belonging to categories a, b, and c respectively; 45%, 96%, and 25% for sentence B; and 35%, 60%, and 91% for sentence C, then the probability of sentence A belonging to category a, the probability of sentence B belonging to category b, and the probability of sentence C belonging to category c are retained; that is, the prediction result list is {95%, 96%, 91%}.
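Step S6 can be sketched as below; the data layout (a dict of per-sentence category probabilities) and function name are illustrative assumptions. The sample data mirrors the first worked example, yielding the {95%, 96%, 91%} list.

```python
def build_prediction_list(pred_probs, threshold=0.90):
    """S6 sketch: traverse the prediction probability set and keep, for each
    sentence, the (category, probability) pairs exceeding the threshold."""
    result = []
    for sentence, probs in pred_probs.items():
        for category, p in probs.items():
            if p > threshold:
                result.append((sentence, category, p))
    return result

preds = {"A": {"a": 0.30, "b": 0.50, "c": 0.95},
         "B": {"a": 0.45, "b": 0.25, "c": 0.96},
         "C": {"a": 0.35, "b": 0.60, "c": 0.91}}
kept = build_prediction_list(preds)
```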
And S7, extracting categories corresponding to the prediction probability values in the prediction result list, and when the number of the categories is larger than or equal to a preset number threshold, judging that the voice data to be processed passes the verification, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
In the embodiment of the invention, when the number of categories is smaller than the preset number threshold, the voice data to be processed lacks sentences of one or more categories, and it is determined that the voice data to be verified does not pass the verification.
For example, if the different preset categories include category a, category b, and category c, the total number of preset categories is 3. When the categories corresponding to the prediction probability values extracted from the prediction result list include only category c, it is determined that the voice data to be processed does not pass verification; when they include category a, category b, and category c, the number of categories is 3, and it is determined that the voice data to be processed passes verification.
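The final verification decision of S7 can be sketched as below; the function name and the (sentence, category, probability) tuple layout are illustrative assumptions.

```python
def verify(prediction_list, preset_categories):
    """S7 sketch: extract the categories behind the retained prediction
    probability values and pass verification only when every preset category
    is covered (the number threshold equals the total number of categories)."""
    found = {category for (_sentence, category, _p) in prediction_list}
    return len(found) >= len(preset_categories)
```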
In the embodiment of the invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which enriches the training data and thereby improves the robustness and accuracy of the standard text detection model obtained through training. Voice data to be processed is acquired and input into the standard text detection model for probability calculation to obtain a prediction probability value set, the prediction probability value set including prediction probability values of the voice data to be processed corresponding to different preset categories; the standard text detection model is thus used to determine the categories of the content contained in the voice data to be processed, and verification is then performed according to the results output by the standard text detection model. Because the standard text detection model outputs results quickly and accurately, the verification result can also be obtained quickly and accurately. Therefore, the voice information verification method provided by the invention can quickly and accurately verify the content contained in the voice.
Fig. 2 is a schematic block diagram of a voice information verification apparatus according to an embodiment of the present invention.
The voice information verification apparatus 100 according to the present invention may be installed in an electronic device. According to the implemented functions, the voice information verification apparatus 100 may include a data processing module 101, a model training module 102, a prediction result list generation module 103 and a data verification module 104. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of the electronic device, can perform a fixed function and are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the data processing module 101 is configured to obtain an original voice data set, perform voice recognition on the original voice data set by using a preset voice recognition model, and obtain a text data set; labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
the model training module 102 is configured to calculate a cross entropy loss value between the classification probability value set and a preset true probability value, adjust an internal parameter of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, and obtain a standard text detection model;
the prediction result list generating module 103 is configured to acquire to-be-processed voice data, input the to-be-processed voice data to the standard text detection model, and perform probability calculation to obtain a prediction probability value set, where the prediction probability value set includes prediction probability values of different preset categories corresponding to the to-be-processed voice data; summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
the data checking module 104 is configured to extract categories corresponding to the prediction probability values in the prediction result list, and when the number of the categories is greater than or equal to a preset number threshold, determine that the voice data to be processed passes the check, where the preset number threshold is equal to the total number of the categories in different preset categories.
In detail, the specific implementation of each module of the voice information verification apparatus 100 is as follows:
the data processing module 101 is configured to obtain an original voice data set, and perform voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set.
In an embodiment of the present invention, the original voice data set includes a plurality of pieces of voice information collected within a period of time, for example, a plurality of telephone recordings of a credit card approver calling customers to verify information.
Specifically, voice recognition is performed on the original voice data set by using the preset voice recognition model, converting the voice data in the original voice data set into text data, which facilitates subsequent evaluation processing and is more intuitive and convenient.
Wherein the speech recognition model may be a CRNN (Convolutional Recurrent Neural Network).
The data processing module 101 is configured to label categories of text data in the text data set to obtain a labeled data set, and perform text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set.
In the embodiment of the invention, the text data set can be labeled by calling an interface of an automatic labeling tool, and the text data in the text data set is labeled into the corresponding category to obtain the labeled data set.
For example, a certain text data included in the text data set is as follows:
"approver: ' do you ask for XX men? '
Customer: ' is. '
The approver: ' Do you currently have an unmarried status? '
Customer: ' is. '
The approver: ' Is your home in the xx community on x road? '
Customer: ' paired. '"
The text data set is labeled according to 9 preset categories, which are, in order: Email, marital status, academic record, relatives, mobile phone number, unit name, unit address, unit telephone and residential address. The labeled data set obtained after the labeling processing is as follows: marital status: unmarried; residential address: xx community, x road.
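The embodiment calls the interface of an automatic labeling tool; purely for illustration, a toy keyword rule can stand in for that tool. All keyword lists and category names below are invented assumptions, not the tool's actual interface.

```python
# Hedged sketch of assigning a category label to one utterance.
# A real system would call the automatic labeling tool's API instead;
# this keyword lookup is only a stand-in for illustration.

CATEGORY_KEYWORDS = {
    "marital status": ["unmarried", "married"],
    "residential address": ["road", "community"],
}

def label(utterance: str):
    """Return the first category whose keywords appear in the utterance,
    or None when no category matches."""
    text = utterance.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return None

assert label("Do you currently have an unmarried status?") == "marital status"
assert label("Is your home in the xx community on x road?") == "residential address"
```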
Specifically, the performing text enhancement processing on the labeled data set by combining with a preset dictionary to obtain a training data set includes:
screening according to the length of the text in the labeled data set to obtain a screened data set;
and combining a preset dictionary to perform text enhancement on the screening data set to obtain a training data set.
Further, the screening according to the length of the text in the labeled data set to obtain a screened data set includes:
eliminating texts with the length of texts in the labeled data set smaller than a first text threshold;
reserving texts with the text length larger than the first text threshold value and smaller than a second text threshold value in the labeling data set, wherein the second text threshold value is larger than the first text threshold value;
and if a text with a text length larger than the second text threshold exists in the labeled data set, directly intercepting, in order from front to back, characters up to the second text threshold to serve as the text in the screened data set.
Preferably, in the embodiment of the present invention, the first text threshold may be preset, for example, the first text threshold is 5, and the second text threshold may also be preset, for example, the second text threshold is 512.
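The three screening rules above, with the preferred thresholds 5 and 512, can be sketched as follows; the function and variable names are illustrative assumptions.

```python
# Sketch of the length-based screening: eliminate texts shorter than the
# first threshold, keep texts between the two thresholds, and truncate
# longer texts front-to-back at the second threshold.

FIRST_TEXT_THRESHOLD = 5     # preferred value from the embodiment
SECOND_TEXT_THRESHOLD = 512  # preferred value from the embodiment

def screen(labeled_texts: list) -> list:
    screened = []
    for text in labeled_texts:
        if len(text) < FIRST_TEXT_THRESHOLD:
            continue                                   # eliminate too-short texts
        screened.append(text[:SECOND_TEXT_THRESHOLD])  # keep, truncating if too long
    return screened

assert screen(["hi", "hello world", "x" * 600]) == ["hello world", "x" * 512]
```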
Further, the text enhancement is performed on the screening data set by combining with a preset dictionary to obtain a training data set, including:
randomly extracting a plurality of target words from the screening dataset;
randomly extracting synonyms of the target words from a preset dictionary;
and adding the synonyms into the screening data set to obtain a training data set.
By performing the text enhancement processing, the synonyms of the target words are amplified into the screened data set, so that the size of the training data set is increased and the accuracy of model training is improved.
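A minimal sketch of this synonym-based enhancement follows, assuming a toy preset dictionary; the real dictionary contents are not specified in the text, so every entry below is invented for illustration.

```python
# Sketch of the text enhancement: randomly extract target words from the
# screened data set, randomly extract their synonyms from a preset
# dictionary, and add the synonyms to obtain the training data set.
import random

SYNONYM_DICT = {"home": ["residence"], "phone": ["telephone"]}  # toy dictionary

def enhance(screened: list, n_targets: int = 1, seed: int = 0) -> list:
    rng = random.Random(seed)  # seeded for a reproducible illustration
    # Candidate target words: words from the screened set that the dictionary covers.
    vocab = [w for text in screened for w in text.split() if w in SYNONYM_DICT]
    training = list(screened)
    for word in rng.sample(vocab, min(n_targets, len(vocab))):
        synonym = rng.choice(SYNONYM_DICT[word])
        training.append(synonym)  # amplify the synonym into the data set
    return training

augmented = enhance(["is your home on x road"])
assert len(augmented) == 2 and "residence" in augmented
```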
The data processing module 101 is further configured to input the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, where the classification probability value set includes probability values of different categories corresponding to the training data in the training data set.
In the embodiment of the invention, a preset bert-base-chinese model is pre-trained on relevant data acquired in the credit card approval field, so that the text detection model is obtained.
Specifically, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
carrying out feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a full-link layer in the preset text detection model for probability calculation to obtain a classification probability value set.
Further, the encoding the training data set to obtain a training vector set includes:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing word-stop-removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
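The three encoding steps above can be sketched as follows, assuming a toy stop-word list and vocabulary; in practice these would come from the bert-base-chinese tokenizer or a comparable resource, so both tables here are illustrative placeholders.

```python
# Sketch of encoding the training data set: word segmentation,
# stop-word removal, and mapping each remaining token to an integer id.

STOP_WORDS = {"the", "a", "is"}                # toy stop-word list
VOCAB = {"home": 1, "on": 2, "x": 3, "road": 4}  # toy vocabulary

def encode(text: str) -> list:
    tokens = text.lower().split()                        # word segmentation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    return [VOCAB.get(t, 0) for t in tokens]             # map to ids (0 = unknown)

assert encode("the home is on x road") == [1, 2, 3, 4]
```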
The model training module 102 is configured to calculate a cross entropy loss value between the classification probability value set and a preset true probability value, and adjust internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, so as to obtain a standard text detection model.
In the embodiment of the present invention, the calculating a cross entropy loss value between the classification probability value set and a preset true probability value includes:
calculating the cross entropy loss value by using the following calculation formula:

L = -(1/n) Σx [ y·ln(a) + (1−y)·ln(1−a) ]

wherein L is the cross entropy loss value, x ranges over the training data in the training data set, y is the preset true probability value, a is the classification probability value, and n represents the total number of training data in the training data set.
Specifically, adjusting the internal parameters of the text detection model according to the cross entropy loss value includes: if the cross entropy loss value is larger than or equal to a preset loss threshold value, adjusting internal parameters of the text detection model, inputting the training data set into the adjusted text detection model to obtain a classification probability value, recalculating the cross entropy loss value until the cross entropy loss value is smaller than the preset loss threshold value, and obtaining a standard text detection model.
Preferably, the internal parameters may be the gradients and the weights of the model.
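A minimal sketch of the training loop described above: the cross entropy formula is applied and the loop repeats until the loss falls below the preset loss threshold. A single scalar parameter with a fixed-step update stands in for the model's internal parameters; the real model would be updated by gradient descent, so the update rule here is purely illustrative.

```python
# Sketch of iterating until the cross entropy loss value is smaller
# than the preset loss threshold.
import math

def cross_entropy(probs: list, trues: list) -> float:
    """L = -(1/n) * sum(y*ln(a) + (1-y)*ln(1-a)) over the training data."""
    n = len(probs)
    return -sum(y * math.log(a) + (1 - y) * math.log(1 - a)
                for a, y in zip(probs, trues)) / n

LOSS_THRESHOLD = 0.05
a, y = 0.6, 1.0              # classification probability vs. preset true value
while cross_entropy([a], [y]) >= LOSS_THRESHOLD:
    a += 0.01                # stand-in for adjusting the internal parameters

assert cross_entropy([a], [y]) < LOSS_THRESHOLD
```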
The prediction result list generating module 103 is configured to acquire to-be-processed voice data, input the to-be-processed voice data to the standard text detection model, and perform probability calculation to obtain a prediction probability value set, where the prediction probability value set includes prediction probability values of different preset categories corresponding to the to-be-processed voice data.
In an embodiment of the present invention, the prediction probability value set includes prediction probability values of different categories corresponding to the voice data to be processed. For example, if the voice data to be processed includes a sentence A, a sentence B and a sentence C, and the preset categories include a category a, a category b and a category c, then the prediction probability value set includes the prediction probability values of the sentence A belonging to the category a, the category b and the category c, the prediction probability values of the sentence B belonging to the category a, the category b and the category c, and the prediction probability values of the sentence C belonging to the category a, the category b and the category c.
The prediction result list generating module 103 is configured to summarize the prediction probability values that are greater than a preset probability threshold in the prediction probability value set, so as to obtain a prediction result list.
In the embodiment of the invention, the prediction probability value set is composed of a plurality of prediction probability values. When a prediction probability value is greater than the preset probability threshold, the prediction probability value is retained; the whole prediction probability value set is traversed, and the retained prediction probability values are summarized.
For example, if the preset probability threshold is 90%, the retained prediction probability values correspond, for each sentence in the voice data to be processed, to the category with the highest probability for that sentence.
For example, if the prediction probability value set includes the prediction probability values 30%, 50% and 95% of the sentence A belonging to the category a, the category b and the category c, the prediction probability values 45%, 25% and 96% of the sentence B belonging to the category a, the category b and the category c, and the prediction probability values 35%, 60% and 91% of the sentence C belonging to the category a, the category b and the category c, then the prediction probability value of the sentence A belonging to the category c, that of the sentence B belonging to the category c and that of the sentence C belonging to the category c are retained, that is, the prediction result list includes { 95%, 96%, 91% }.
For another example, if the prediction probability value set includes the prediction probability values 95%, 50% and 30% of the sentence A belonging to the category a, the category b and the category c, the prediction probability values 45%, 96% and 25% of the sentence B belonging to the category a, the category b and the category c, and the prediction probability values 35%, 60% and 91% of the sentence C belonging to the category a, the category b and the category c, then the prediction probability value of the sentence A belonging to the category a, that of the sentence B belonging to the category b and that of the sentence C belonging to the category c are retained, that is, the prediction result list includes { 95%, 96%, 91% }.
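The traversal and threshold screening described above can be sketched as follows, reproducing the numbers from the second example; the nested-dictionary layout of the prediction probability value set is an illustrative assumption.

```python
# Sketch of building the prediction result list: traverse the prediction
# probability value set and retain every value above the preset
# probability threshold (90% in the examples above). If two sentences
# exceeded the threshold for the same category, the later value would
# overwrite the earlier one in this simplified layout.

PROBABILITY_THRESHOLD = 0.90

def prediction_result_list(prob_set: dict) -> dict:
    """prob_set maps sentence -> {category: probability}; returns the
    retained category -> probability pairs."""
    result = {}
    for sentence, probs in prob_set.items():
        for category, p in probs.items():
            if p > PROBABILITY_THRESHOLD:
                result[category] = p
    return result

prob_set = {
    "A": {"a": 0.95, "b": 0.50, "c": 0.30},
    "B": {"a": 0.45, "b": 0.96, "c": 0.25},
    "C": {"a": 0.35, "b": 0.60, "c": 0.91},
}
assert prediction_result_list(prob_set) == {"a": 0.95, "b": 0.96, "c": 0.91}
```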
The data checking module 104 is configured to extract categories corresponding to the prediction probability values in the prediction result list, and when the number of the categories is greater than or equal to a preset number threshold, determine that the voice data to be processed passes the check, where the preset number threshold is equal to the total number of the categories in different preset categories.
In the embodiment of the invention, when the number of the categories is smaller than the preset number threshold, one or more categories of sentences are missing from the voice data to be processed, and it is determined that the voice data to be processed does not pass the verification.
For example, if the different preset categories include a category a, a category b and a category c, the total number of the different preset categories is 3. When the category corresponding to the prediction probability values extracted from the prediction result list only comprises the category c, it is determined that the voice data to be processed does not pass the verification; when the categories corresponding to the prediction probability values extracted from the prediction result list comprise the category a, the category b and the category c, the number of the categories is 3, and it is determined that the voice data to be processed passes the verification.
In the embodiment of the invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which enriches the training data and thereby improves the robustness and accuracy of the standard text detection model obtained through training. Voice data to be processed is acquired and input into the standard text detection model for probability calculation to obtain a prediction probability value set, the prediction probability value set including prediction probability values of the voice data to be processed corresponding to different preset categories; the standard text detection model is thus used to determine the categories of the content contained in the voice data to be processed, and verification is then performed according to the results output by the standard text detection model. Because the standard text detection model outputs results quickly and accurately, the verification result can also be obtained quickly and accurately. Therefore, the voice information verification apparatus provided by the invention can quickly and accurately verify the content contained in the voice.
Fig. 3 is a schematic structural diagram of an electronic device implementing the voice information verification method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a voice information verification program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of the voice information verification program 12, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., executing a voice information verification program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The voice information verification program 12 stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when running in the processor 10, can realize:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value set and a preset real probability value, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed;
summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile, and may include, for example: any entity or device capable of carrying the computer program code, a recording medium, a U-disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, which stores a computer program that, when executed by a processor of an electronic device, can implement:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value set and a preset real probability value, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed;
summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any accompanying claims should not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A voice information verification method, characterized in that the method comprises:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value set and a preset real probability value, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed;
summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
2. The method for verifying speech information according to claim 1, wherein said performing text enhancement processing on said labeled data set in combination with a preset dictionary to obtain a training data set comprises:
screening according to the length of the text in the labeled data set to obtain a screened data set;
and performing text enhancement on the screened data set in combination with a preset dictionary to obtain the training data set.
3. The method for verifying speech information according to claim 2, wherein said screening according to the length of the text in the labeled data set to obtain a screened data set comprises:
eliminating texts in the labeled data set whose text length is smaller than a first text threshold;
retaining texts in the labeled data set whose text length is larger than the first text threshold and smaller than a second text threshold, wherein the second text threshold is larger than the first text threshold;
and if a text whose text length is larger than the second text threshold exists in the labeled data set, directly intercepting a plurality of characters of the text in order from front to back to obtain the screened data set.
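A minimal sketch of the length-based screening of claim 3. The two thresholds and the number of characters kept from over-long texts (`keep_chars`) are hypothetical preset parameters; the claim does not fix their values.

```python
def screen_by_length(texts, first_threshold, second_threshold, keep_chars):
    screened = []
    for text in texts:
        n = len(text)
        if n < first_threshold:
            continue  # eliminate texts shorter than the first text threshold
        if n > second_threshold:
            # intercept a plurality of characters in order from front to back
            screened.append(text[:keep_chars])
        else:
            screened.append(text)  # retain texts between the two thresholds
    return screened

print(screen_by_length(["ab", "hello", "a very long sentence"], 3, 10, 8))
# → ['hello', 'a very l']
```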
4. The method for verifying speech information according to claim 2, wherein said performing text enhancement on the screened data set in combination with a preset dictionary to obtain a training data set comprises:
randomly extracting a plurality of target words from the screened data set;
randomly extracting synonyms of the target words from the preset dictionary;
and adding the synonyms into the screened data set to obtain the training data set.
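One plausible reading of claim 4, sketched with a toy synonym dictionary: target words are sampled at random, and a synonym-substituted copy of each affected text is added to the data set. The `augment_with_synonyms` helper and the dictionary contents are illustrative, not the patented implementation.

```python
import random

def augment_with_synonyms(texts, synonym_dict, num_targets, seed=0):
    rng = random.Random(seed)
    # randomly extract a plurality of target words from the screened data set
    words = [w for t in texts for w in t.split()]
    targets = rng.sample(words, min(num_targets, len(words)))
    augmented = list(texts)
    for word in targets:
        # randomly extract a synonym of the target word from the dictionary
        synonyms = synonym_dict.get(word)
        if synonyms:
            # add a synonym-substituted copy of each affected text
            augmented += [t.replace(word, rng.choice(synonyms))
                          for t in texts if word in t.split()]
    return augmented

aug = augment_with_synonyms(["please confirm the amount"],
                            {"please": ["kindly"], "confirm": ["verify"],
                             "the": ["that"], "amount": ["sum"]}, 1)
print(aug)  # the original text plus one synonym-substituted copy
```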
5. The method for verifying speech information according to any one of claims 1 to 4, wherein said inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set comprises:
coding the training data set to obtain a training vector set;
carrying out feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a fully-connected layer in the preset text detection model for probability calculation to obtain a classification probability value set.
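The last step of claim 5 can be sketched as a fully-connected layer producing one logit per preset category, followed by softmax to yield the classification probability value set. The weights, biases, and feature values below are illustrative.

```python
import math

def fc_softmax(features, weights, biases):
    # fully-connected layer: one logit per preset category
    logits = [sum(f * w for f, w in zip(features, row)) + b
              for row, b in zip(weights, biases)]
    # softmax converts logits into classification probability values
    peak = max(logits)                       # subtract max for stability
    exps = [math.exp(z - peak) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = fc_softmax([1.0, 2.0], [[0.5, 0.1], [0.2, 0.4]], [0.0, 0.1])
print(probs)  # one probability per preset category, summing to 1
```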
6. The method for verifying speech information according to claim 5, wherein said encoding the training data set to obtain a training vector set comprises:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing stop-word removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
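The encoding pipeline of claim 6 can be sketched as below. Whitespace splitting stands in for real word segmentation (a Chinese-text system would use a dedicated segmenter); the stop-word list, vocabulary, and `encode` helper are toy assumptions.

```python
def encode(sentences, stop_words):
    vocab = {}
    vectors = []
    for sentence in sentences:
        tokens = [w for w in sentence.split()   # word segmentation (stand-in)
                  if w not in stop_words]       # stop-word removal
        # map each remaining word to an integer id, giving a training vector
        vectors.append([vocab.setdefault(w, len(vocab)) for w in tokens])
    return vectors, vocab

vecs, vocab = encode(["the amount is correct", "the date is wrong"],
                     {"the", "is"})
print(vecs)  # [[0, 1], [2, 3]]
```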
7. The method for verifying speech information according to claim 1, wherein said calculating a cross entropy loss value between the classification probability value set and a preset real probability value comprises:
calculating the cross entropy loss value by using the following calculation formula:

L = -(1/n) Σ_x [y·ln(a) + (1-y)·ln(1-a)]

wherein L is the cross entropy loss value, x ranges over the training data in the training data set, y is the preset real probability value, a is the classification probability value, and n is the total number of training data in the training data set.
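The cross entropy calculation of claim 7 transcribes directly into code; the (y, a) pairs below are illustrative training data, not values from the patent.

```python
import math

def cross_entropy_loss(pairs):
    """pairs: one (y, a) pair per training datum, with y the preset real
    probability value and a the model's classification probability value."""
    n = len(pairs)
    # L = -(1/n) * sum over x of [y*ln(a) + (1-y)*ln(1-a)]
    return -sum(y * math.log(a) + (1 - y) * math.log(1 - a)
                for y, a in pairs) / n

loss = cross_entropy_loss([(1.0, 0.5), (0.0, 0.5)])
print(loss)  # ln 2 ≈ 0.6931
```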
8. A voice information verification apparatus, comprising:
the data processing module is used for acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set; labeling the category of the text data in the text data set to obtain a labeled data set, and performing text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different classes corresponding to training data in the training data set;
the model training module is used for calculating a cross entropy loss value between the classification probability value set and a preset real probability value, adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, and obtaining a standard text detection model;
the prediction result list generation module is used for acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed; summarizing the prediction probability values which are greater than a preset probability threshold in the prediction probability value set to obtain a prediction result list;
and the data verification module is used for extracting the categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the categories of different preset categories.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform a method of verifying speech information according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out a method for verifying speech information according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011508407.7A CN112560453B (en) | 2020-12-18 | 2020-12-18 | Voice information verification method and device, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560453A true CN112560453A (en) | 2021-03-26 |
CN112560453B CN112560453B (en) | 2023-07-14 |
Family
ID=75031801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508407.7A Active CN112560453B (en) | 2020-12-18 | 2020-12-18 | Voice information verification method and device, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560453B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112966509A (en) * | 2021-04-16 | 2021-06-15 | 重庆度小满优扬科技有限公司 | Text quality evaluation method and device, storage medium and computer equipment |
CN113257227A (en) * | 2021-04-25 | 2021-08-13 | 平安科技(深圳)有限公司 | Method, device and equipment for detecting performance of voice recognition model and storage medium |
CN113590786A (en) * | 2021-07-28 | 2021-11-02 | 平安科技(深圳)有限公司 | Data prediction method, device, equipment and storage medium |
CN113609851A (en) * | 2021-07-09 | 2021-11-05 | 浙江连信科技有限公司 | Psychological idea cognitive deviation identification method and device and electronic equipment |
CN113744724A (en) * | 2021-08-30 | 2021-12-03 | 苏州浪潮智能科技有限公司 | Voice conversion method, device, equipment and storage medium |
CN114399766A (en) * | 2022-01-18 | 2022-04-26 | 平安科技(深圳)有限公司 | Optical character recognition model training method, device, equipment and medium |
CN115082736A (en) * | 2022-06-23 | 2022-09-20 | 平安普惠企业管理有限公司 | Garbage identification and classification method and device, electronic equipment and storage medium |
CN116992033A (en) * | 2023-09-25 | 2023-11-03 | 北京中关村科金技术有限公司 | Text classification threshold determining method, text classification method and related device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767759A (en) * | 2019-02-14 | 2019-05-17 | 重庆邮电大学 | End-to-end speech recognition methods based on modified CLDNN structure |
CN110021308A (en) * | 2019-05-16 | 2019-07-16 | 北京百度网讯科技有限公司 | Voice mood recognition methods, device, computer equipment and storage medium |
CN110223675A (en) * | 2019-06-13 | 2019-09-10 | 苏州思必驰信息科技有限公司 | The screening technique and system of training text data for speech recognition |
CN110503952A (en) * | 2019-07-29 | 2019-11-26 | 北京搜狗科技发展有限公司 | A kind of method of speech processing, device and electronic equipment |
CN110895935A (en) * | 2018-09-13 | 2020-03-20 | 阿里巴巴集团控股有限公司 | Speech recognition method, system, device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112560453B (en) | Voice information verification method and device, electronic equipment and medium | |
CN112597312A (en) | Text classification method and device, electronic equipment and readable storage medium | |
CN112541338A (en) | Similar text matching method and device, electronic equipment and computer storage medium | |
CN112883190A (en) | Text classification method and device, electronic equipment and storage medium | |
CN112988963A (en) | User intention prediction method, device, equipment and medium based on multi-process node | |
CN113157927A (en) | Text classification method and device, electronic equipment and readable storage medium | |
CN111651585A (en) | Information verification method and device, electronic equipment and storage medium | |
CN113064994A (en) | Conference quality evaluation method, device, equipment and storage medium | |
CN112951233A (en) | Voice question and answer method and device, electronic equipment and readable storage medium | |
CN114840684A (en) | Map construction method, device and equipment based on medical entity and storage medium | |
CN113360654B (en) | Text classification method, apparatus, electronic device and readable storage medium | |
CN113434542B (en) | Data relationship identification method and device, electronic equipment and storage medium | |
CN112992187B (en) | Context-based voice emotion detection method, device, equipment and storage medium | |
CN113205814B (en) | Voice data labeling method and device, electronic equipment and storage medium | |
CN113627160A (en) | Text error correction method and device, electronic equipment and storage medium | |
CN113628043A (en) | Complaint validity judgment method, device, equipment and medium based on data classification | |
CN113344125A (en) | Long text matching identification method and device, electronic equipment and storage medium | |
CN112579781A (en) | Text classification method and device, electronic equipment and medium | |
CN114780688A (en) | Text quality inspection method, device and equipment based on rule matching and storage medium | |
CN114186028A (en) | Consult complaint work order processing method, device, equipment and storage medium | |
CN113902404A (en) | Employee promotion analysis method, device, equipment and medium based on artificial intelligence | |
CN112712797A (en) | Voice recognition method and device, electronic equipment and readable storage medium | |
CN113808616A (en) | Voice compliance detection method, device, equipment and storage medium | |
CN113157677A (en) | Data filtering method and device based on trust behaviors | |
CN113486238A (en) | Information pushing method, device and equipment based on user portrait and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||