CN112560453B - Voice information verification method and device, electronic equipment and medium

Voice information verification method and device, electronic equipment and medium

Info

Publication number
CN112560453B
CN112560453B (application CN202011508407.7A)
Authority
CN
China
Prior art keywords
data set
text
preset
voice
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011508407.7A
Other languages
Chinese (zh)
Other versions
CN112560453A (en)
Inventor
李骁
赖众程
王亮
高洪喜
海洋
罗霞
张凌旖
李林毅
李会璟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd
Priority to CN202011508407.7A
Publication of CN112560453A
Application granted
Publication of CN112560453B
Legal status: Active

Classifications

    • G06F 40/242 Handling natural language data; Natural language analysis; Lexical tools; Dictionaries
    • G06F 16/3343 Information retrieval of unstructured textual data; Querying; Query execution using phonetics
    • G06F 16/35 Information retrieval of unstructured textual data; Clustering; Classification
    • G06F 18/214 Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 40/247 Handling natural language data; Natural language analysis; Lexical tools; Thesauruses; Synonyms
    • G06F 40/279 Handling natural language data; Natural language analysis; Recognition of textual entities
    • G06N 3/045 Neural networks; Architecture; Combinations of networks
    • G10L 15/26 Speech recognition; Speech to text systems
    • G10L 25/51 Speech or voice analysis specially adapted for particular use, for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to voice processing technology, and discloses a voice information verification method comprising: performing voice recognition, data labeling, text enhancement and probability calculation on an original voice data set to obtain classification probability values; calculating a cross entropy loss value between the classification probability values and the true probability values, and obtaining a standard text detection model when the cross entropy loss value is smaller than a loss threshold value; performing probability calculation on voice data to be processed to obtain a predicted probability value set; summarizing the predicted probability values larger than a probability threshold value to obtain a prediction result list; extracting the categories corresponding to those predicted probability values; and judging that the voice data to be processed passes the verification when the number of the categories is greater than or equal to a preset number threshold value. The invention also relates to blockchain technology: the training data set and other data may be stored in blockchain nodes. The invention further discloses a voice information verification device, an electronic device and a storage medium. The invention can quickly and accurately verify the content contained in voice.

Description

Voice information verification method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of speech processing technologies, and in particular, to a method and apparatus for verifying speech information, an electronic device, and a computer readable storage medium.
Background
In daily life and work, conversations are often recorded, and after recording, the recordings are played back to check whether they contain certain kinds of information. For example, when a merchant communicates with a user by telephone or face to face, the telephone content or the face-to-face conversation is often recorded, and a verifier then determines whether the recorded content contains certain categories of information: the recorded content is judged complete when it contains those categories and incomplete when it does not.
In the prior art, verification methods for such voice information usually require verification personnel to compile keywords, and verification of the voice information is achieved through keyword matching. However, compiling keywords is quite labor-intensive and inefficient, and because keywords can never be fully exhausted, the verification may be inaccurate if relevant keywords are missing.
Disclosure of Invention
The invention provides a voice information verification method, a device, electronic equipment and a computer readable storage medium, and mainly aims to quickly and accurately verify content contained in voice.
In order to achieve the above object, the present invention provides a voice information verification method, including:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, to obtain a standard text detection model;
obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a predicted probability value set, wherein the predicted probability value set comprises predicted probability values of different preset categories corresponding to the voice data to be processed;
summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a prediction result list;
and extracting the categories corresponding to the predicted probability values in the prediction result list, and judging that the voice data to be processed passes the verification when the number of the categories is greater than or equal to a preset number threshold value, wherein the preset number threshold value is equal to the total number of the different preset categories.
Optionally, the performing text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set includes:
screening according to the length of the texts in the labeled data set to obtain a screening data set;
and performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set.
Optionally, the screening according to the length of the texts in the labeled data set to obtain a screening data set includes:
removing texts with a text length smaller than a first text threshold from the labeled data set;
retaining texts with a text length greater than the first text threshold and less than a second text threshold in the labeled data set, wherein the second text threshold is greater than the first text threshold;
and if a text with a text length larger than the second text threshold exists in the labeled data set, directly intercepting a number of its leading characters in front-to-back order for the screening data set.
Optionally, the performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set includes:
randomly extracting a plurality of target words from the screening data set;
randomly extracting synonyms of the target words from the preset dictionary;
and adding the synonyms into the screening data set to obtain the training data set.
Optionally, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
performing feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a full-connection layer in the preset text detection model to perform probability calculation, so as to obtain a classification probability value set.
Optionally, the encoding the training data set to obtain a training vector set includes:
word segmentation processing is carried out on the training data set to obtain a word segmentation data set;
performing stop-word removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
Optionally, the calculating a cross entropy loss value between the classification probability value in the classification probability value set and a preset true probability value includes:
the cross entropy loss value is calculated using the following calculation formula:
L = -\frac{1}{n} \sum_{x} \left[ y \ln a + (1 - y) \ln(1 - a) \right]
wherein L is a cross entropy loss value, x is training data in the training data set, y is a preset real probability value, a is a classification probability value, and n represents the total number of training data in the training data set.
In order to solve the above problems, the present invention also provides a voice information verification apparatus, the apparatus comprising:
the data processing module is used for acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set; labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
the model training module is used for calculating a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, to obtain a standard text detection model;
the prediction result list generating module is used for acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a predicted probability value set, wherein the predicted probability value set comprises predicted probability values of different preset categories corresponding to the voice data to be processed; and summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a prediction result list;
and the data verification module is used for extracting the categories corresponding to the predicted probability values in the prediction result list, and judging that the voice data to be processed passes the verification when the number of the categories is greater than or equal to a preset number threshold value, wherein the preset number threshold value is equal to the total number of the different preset categories.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the voice information verification method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned voice information verification method.
According to the embodiments of the present invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which increases the robustness of the training data and thereby improves the robustness and accuracy of the standard text detection model trained on it. Voice data to be processed are then obtained and input into the standard text detection model for probability calculation, yielding a predicted probability value set that contains the predicted probability values of the different preset categories corresponding to the voice data to be processed; the categories of the content contained in the voice data to be processed are thus determined by the standard text detection model, and verification is performed according to its output. Because the standard text detection model outputs results quickly and accurately, the verification result can likewise be obtained quickly and accurately. Therefore, the voice information verification method and device and the computer readable storage medium provided by the present invention can quickly and accurately verify the content contained in voice.
Drawings
Fig. 1 is a flow chart of a voice information verification method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a voice information verification device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device for implementing a voice information verification method according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention provides a voice information verification method. The execution subject of the voice information verification method includes, but is not limited to, at least one of a server, a terminal and other electronic devices that can be configured to execute the method provided by the embodiments of the present application. In other words, the voice information verification method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a flow chart of a voice information verification method according to an embodiment of the present invention is shown. In this embodiment, the voice information verification method includes:
s1, acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set.
In the embodiment of the invention, the original voice data set comprises a plurality of voice recordings collected within a period of time; for example, the original voice data set comprises several telephone recordings of credit card approval personnel calling customers to verify information.
Specifically, voice recognition is performed on the recorded data using the preset voice recognition model, and the voice data in the original voice data set are converted into text data, which facilitates subsequent processing and makes the data intuitive and convenient to handle.
The speech recognition model may be a CRNN (Convolutional Recurrent Neural Network).
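For illustration only, the following is a minimal Python sketch of this step; the CRNNSpeechRecognizer class and its transcribe method are hypothetical placeholders, since the patent does not specify the speech recognition model at the code level:

```python
# Hypothetical sketch of S1: converting an original voice data set into a text data set.
# CRNNSpeechRecognizer and transcribe are assumed names, not a real library API.

class CRNNSpeechRecognizer:
    """Placeholder for the preset CRNN-based speech recognition model."""
    def transcribe(self, audio_path: str) -> str:
        raise NotImplementedError("load a trained CRNN model here")

def build_text_dataset(audio_paths, recognizer):
    # One transcript per recording in the original voice data set.
    return [recognizer.transcribe(path) for path in audio_paths]
```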
S2, marking the types of the text data in the text data set to obtain a marked data set, and performing text enhancement processing on the marked data set by combining a preset dictionary to obtain a training data set.
In the embodiment of the invention, the text data set can be labeled by calling the interface of an automatic labeling tool, which labels the text data in the text data set with their corresponding categories, so as to obtain the labeled data set.
For example, a certain text data included in the text data set is the following:
"Approver: 'Hello, may I ask if this is Mr. XX?'
Customer: 'Yes.'
Approver: 'Are you currently in an unmarried state?'
Customer: 'Mm, yes.'
Approver: 'Is your home at xx Road, xx Community?'
Customer: 'Correct.'"
The text data set is labeled according to the preset categories, which are, in sequence: email, marital status, education background, related contacts, mobile phone number, employer name, employer address, employer phone and residential address. The labeled data set obtained after labeling processing is: marital status: unmarried; residential address: xx Road, xx Community.
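For illustration, the labeled record derived from the dialogue above might be represented as follows; this is only a sketch, and the field names are assumptions rather than a structure mandated by the patent:

```python
# Hypothetical representation of one labeled sample from the dialogue above.
labeled_sample = {
    "text": "Hello, may I ask if this is Mr. XX? ... Is your home at xx Road, xx Community?",
    "labels": {
        "marital status": "unmarried",
        "residential address": "xx Road, xx Community",
    },
}
```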
Specifically, the performing text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set includes:
screening according to the length of the texts in the labeled data set to obtain a screening data set;
and performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set.
Further, the screening according to the length of the texts in the labeled data set to obtain a screening data set includes:
removing texts with a text length smaller than a first text threshold from the labeled data set;
retaining texts with a text length greater than the first text threshold and less than a second text threshold in the labeled data set, wherein the second text threshold is greater than the first text threshold;
and if a text with a text length larger than the second text threshold exists in the labeled data set, directly intercepting a number of its leading characters in front-to-back order for the screening data set.
Preferably, in the embodiment of the present invention, the first text threshold may be preset, for example, the first text threshold is 5, and the second text threshold may also be preset, for example, the second text threshold is 512.
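A minimal sketch of the length screening step, assuming the illustrative thresholds of 5 and 512 characters mentioned above (the handling of texts whose length exactly equals a threshold is not specified by the patent):

```python
FIRST_TEXT_THRESHOLD = 5     # texts shorter than this are removed
SECOND_TEXT_THRESHOLD = 512  # longer texts are truncated front-to-back

def screen_by_length(labeled_texts):
    screening_data_set = []
    for text in labeled_texts:
        if len(text) < FIRST_TEXT_THRESHOLD:
            continue  # too short: discard
        # Keep texts below the second threshold as-is; for longer texts,
        # directly intercept the leading characters in front-to-back order.
        screening_data_set.append(text[:SECOND_TEXT_THRESHOLD])
    return screening_data_set
```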
Further, the performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set includes:
randomly extracting a plurality of target words from the screening data set;
randomly extracting synonyms of the target words from the preset dictionary;
and adding the synonyms into the screening data set to obtain the training data set.
Through the text enhancement processing, synonyms of the target words are amplified into the screening data set, which increases the amount of training data and thereby improves the accuracy of model training.
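A minimal sketch of the synonym-based text enhancement, assuming the preset dictionary is a simple word-to-synonyms mapping; the dictionary contents are illustrative, and appending synonym-substituted variants is one reading of "adding the synonyms into the screening data set":

```python
import random

# Illustrative preset dictionary: word -> list of synonyms.
SYNONYM_DICT = {
    "residence": ["home address", "dwelling"],
    "unmarried": ["single"],
}

def augment_with_synonyms(screening_data_set, n_targets=2):
    training_data_set = list(screening_data_set)  # keep the original texts
    for text in screening_data_set:
        words = text.split()
        candidates = [w for w in words if w in SYNONYM_DICT]
        for word in random.sample(candidates, min(n_targets, len(candidates))):
            synonym = random.choice(SYNONYM_DICT[word])
            # Append a variant with the synonym substituted, enlarging the set.
            training_data_set.append(text.replace(word, synonym, 1))
    return training_data_set
```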
S3, inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set.
In an alternative embodiment of the present invention, the text detection model is obtained by pre-training a preset bert-base-Chinese model through related data acquired in the credit card approval field.
Specifically, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
performing feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a full-connection layer in the preset text detection model to perform probability calculation, so as to obtain a classification probability value set.
Further, the encoding processing performed on the training data set to obtain a training vector set includes:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing stop-word removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
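A minimal sketch of the encoding pipeline; the jieba tokenizer and the stop-word list are assumptions, since the patent does not name a tokenizer or list the stop words:

```python
import jieba  # assumed Chinese word segmentation library; the patent names none

STOP_WORDS = {"的", "了", "吗", "是"}  # illustrative stop-word list

def encode_dataset(texts, vocab):
    """Word segmentation -> stop-word removal -> mapping to id vectors."""
    training_vectors = []
    for text in texts:
        tokens = [t for t in jieba.lcut(text) if t not in STOP_WORDS]
        # Map each remaining token to an integer id (0 for out-of-vocabulary).
        training_vectors.append([vocab.get(t, 0) for t in tokens])
    return training_vectors
```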
S4, calculating a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, so as to obtain a standard text detection model.
In the embodiment of the present invention, the calculating the cross entropy loss value between the classification probability value in the classification probability value set and the preset true probability value includes:
the cross entropy loss value is calculated using the following calculation formula:
L = -\frac{1}{n} \sum_{x} \left[ y \ln a + (1 - y) \ln(1 - a) \right]
wherein L is a cross entropy loss value, x is training data in the training data set, y is a preset real probability value, a is a classification probability value, and n represents the total number of training data in the training data set.
Specifically, adjusting the internal parameters of the text detection model according to the cross entropy loss value includes: and if the cross entropy loss value is greater than or equal to a preset loss threshold value, adjusting internal parameters of the text detection model, inputting the training data set into the adjusted text detection model to obtain a classification probability value, and recalculating the cross entropy loss value until the cross entropy loss value is less than the preset loss threshold value to obtain a standard text detection model.
Preferably, the internal parameters may be the gradients and the weights of the model.
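A minimal PyTorch sketch of the training loop in S4, under the assumption that the model outputs per-category probabilities through a final sigmoid; the loss threshold and learning rate are illustrative, and the bert-base-chinese backbone is abstracted behind a generic model:

```python
import torch
import torch.nn as nn

LOSS_THRESHOLD = 0.05  # illustrative preset loss threshold

def train_until_threshold(model, train_vectors, true_probs, lr=1e-4, max_epochs=100):
    # nn.BCELoss computes L = -(1/n) * sum_x [y*ln(a) + (1-y)*ln(1-a)],
    # matching the cross entropy formula above; it expects probabilities,
    # so the model is assumed to end in a sigmoid.
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        probs = model(train_vectors)         # classification probability values a
        loss = criterion(probs, true_probs)  # cross entropy against y
        if loss.item() < LOSS_THRESHOLD:
            break                            # standard text detection model reached
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                     # adjust internal parameters
    return model
```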
S5, obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation, and obtaining a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed.
In the embodiment of the present invention, the predicted probability value set includes the predicted probability values of the different categories corresponding to the voice data to be processed. For example, if the voice data to be processed includes sentence A, sentence B and sentence C, and the preset categories include category a, category b and category c, then the predicted probability value set includes the predicted probability values of sentence A belonging to categories a, b and c, of sentence B belonging to categories a, b and c, and of sentence C belonging to categories a, b and c.
S6, summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a prediction result list.
In the embodiment of the invention, the predicted probability value set is composed of a plurality of predicted probability values. When a predicted probability value is larger than the preset probability threshold, that predicted probability value is retained; the whole predicted probability value set is traversed, and the retained predicted probability values are summarized.
For example, if the preset probability threshold is 90%, the retained predicted probability values correspond, for each sentence in the voice data to be processed, to the category with the highest probability.
For example, if the predicted probability value set includes predicted probability values of 30%, 50% and 95% for sentence A belonging to categories a, b and c, of 45%, 25% and 96% for sentence B, and of 35%, 60% and 91% for sentence C, then the predicted probability values of sentence A belonging to category c, sentence B belonging to category c and sentence C belonging to category c are retained; that is, the prediction result list contains {95%, 96%, 91%}.
For another example, if the predicted probability value set includes predicted probability values of 95%, 50% and 30% for sentence A belonging to categories a, b and c, of 45%, 96% and 25% for sentence B, and of 35%, 60% and 91% for sentence C, then the predicted probability values of sentence A belonging to category a, sentence B belonging to category b and sentence C belonging to category c are retained; that is, the prediction result list again contains {95%, 96%, 91%}.
S7, extracting the categories corresponding to the predicted probability values in the prediction result list, and judging that the voice data to be processed passes the verification when the number of the categories is greater than or equal to a preset number threshold value, wherein the preset number threshold value is equal to the total number of the different preset categories.
In the embodiment of the invention, when the number of the categories is smaller than the preset number threshold, it indicates that sentences of one or more categories are absent from the voice data to be processed, and it is determined that the voice data to be processed does not pass the verification.
For example, if the different preset categories include category a, category b and category c, the total number of the different preset categories is 3. When the categories corresponding to the predicted probability values in the extracted prediction result list include only category c, it is determined that the voice data to be processed does not pass the verification; when they include category a, category b and category c, the number of the categories is 3, and it is determined that the voice data to be processed passes the verification.
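A minimal sketch of S5 to S7 taken together: retain the predicted probability values above the threshold, collect the corresponding categories, and pass the verification only when every preset category is covered. The 90% threshold and the three categories follow the examples above:

```python
PROBABILITY_THRESHOLD = 0.90
PRESET_CATEGORIES = {"a", "b", "c"}  # illustrative preset categories

def verify(predicted_probs):
    """predicted_probs: {sentence: {category: predicted probability value}}."""
    prediction_result_list = []
    covered_categories = set()
    for sentence, probs in predicted_probs.items():
        for category, p in probs.items():
            if p > PROBABILITY_THRESHOLD:
                prediction_result_list.append(p)
                covered_categories.add(category)
    # Pass only when the number of categories reaches the preset number
    # threshold, i.e. the total number of different preset categories.
    passed = len(covered_categories) >= len(PRESET_CATEGORIES)
    return passed, prediction_result_list

# First example above: every sentence is classified into category c -> fails.
passed, result = verify({
    "A": {"a": 0.30, "b": 0.50, "c": 0.95},
    "B": {"a": 0.45, "b": 0.25, "c": 0.96},
    "C": {"a": 0.35, "b": 0.60, "c": 0.91},
})
print(passed, result)  # False [0.95, 0.96, 0.91]
```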
According to the embodiments of the present invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which increases the robustness of the training data and thereby improves the robustness and accuracy of the standard text detection model trained on it. Voice data to be processed are then obtained and input into the standard text detection model for probability calculation, yielding a predicted probability value set that contains the predicted probability values of the different preset categories corresponding to the voice data to be processed; the categories of the content contained in the voice data to be processed are thus determined by the standard text detection model, and verification is performed according to its output. Because the standard text detection model outputs results quickly and accurately, the verification result can likewise be obtained quickly and accurately. Therefore, the voice information verification method provided by the present invention can quickly and accurately verify the content contained in voice.
Fig. 2 is a schematic block diagram of a voice information verification device according to an embodiment of the present invention.
The voice information verification apparatus 100 of the present invention may be installed in an electronic device. According to the implemented functions, the voice information verification apparatus 100 may include a data processing module 101, a model training module 102, a prediction result list generating module 103 and a data verification module 104. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the data processing module 101 is configured to obtain an original voice data set, and perform voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set; labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
the model training module 102 is configured to calculate a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjust internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than the preset loss threshold value, so as to obtain a standard text detection model;
the prediction result list generating module 103 is configured to obtain to-be-processed voice data, input the to-be-processed voice data to the standard text detection model for probability calculation, and obtain a prediction probability value set, where the prediction probability value set includes prediction probability values of different preset categories corresponding to the to-be-processed voice data; and summarize the prediction probability values in the prediction probability value set which are larger than a preset probability threshold to obtain a prediction result list;
the data verification module 104 is configured to extract the categories corresponding to the predicted probability values in the prediction result list, and determine that the to-be-processed voice data passes the verification when the number of the categories is greater than or equal to a preset number threshold, where the preset number threshold is equal to the total number of the different preset categories.
In detail, the specific embodiments of the modules of the voice information verification apparatus 100 are as follows:
The data processing module 101 is configured to obtain an original voice data set, and perform voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set.
In the embodiment of the invention, the original voice data set comprises a plurality of voice recordings collected within a period of time; for example, the original voice data set comprises several telephone recordings of credit card approval personnel calling customers to verify information.
Specifically, voice recognition is performed on the recorded data using the preset voice recognition model, and the voice data in the original voice data set are converted into text data, which facilitates subsequent processing and makes the data intuitive and convenient to handle.
The speech recognition model may be a CRNN (Convolutional Recurrent Neural Network).
The data processing module 101 is configured to label the category of the text data in the text data set, obtain a labeled data set, and perform text enhancement processing on the labeled data set in combination with a preset dictionary, to obtain a training data set.
In the embodiment of the invention, the text data set can be labeled by calling the interface of an automatic labeling tool, which labels the text data in the text data set with their corresponding categories, so as to obtain the labeled data set.
For example, a certain text data included in the text data set is the following:
"Approver: 'Hello, may I ask if this is Mr. XX?'
Customer: 'Yes.'
Approver: 'Are you currently in an unmarried state?'
Customer: 'Mm, yes.'
Approver: 'Is your home at xx Road, xx Community?'
Customer: 'Correct.'"
The text data set is labeled according to the preset categories, which are, in sequence: email, marital status, education background, related contacts, mobile phone number, employer name, employer address, employer phone and residential address. The labeled data set obtained after labeling processing is: marital status: unmarried; residential address: xx Road, xx Community.
Specifically, the performing text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set includes:
screening according to the length of the texts in the labeled data set to obtain a screening data set;
and performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set.
Further, the screening according to the length of the texts in the labeled data set to obtain a screening data set includes:
removing texts with a text length smaller than a first text threshold from the labeled data set;
retaining texts with a text length greater than the first text threshold and less than a second text threshold in the labeled data set, wherein the second text threshold is greater than the first text threshold;
and if a text with a text length larger than the second text threshold exists in the labeled data set, directly intercepting a number of its leading characters in front-to-back order for the screening data set.
Preferably, in the embodiment of the present invention, the first text threshold may be preset, for example, the first text threshold is 5, and the second text threshold may also be preset, for example, the second text threshold is 512.
Further, the performing text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set includes:
randomly extracting a plurality of target words from the screening data set;
randomly extracting synonyms of the target words from the preset dictionary;
and adding the synonyms into the screening data set to obtain the training data set.
Through the text enhancement processing, synonyms of the target words are amplified into the screening data set, which increases the amount of training data and thereby improves the accuracy of model training.
The data processing module 101 is further configured to input the training data set into a preset text detection model for probability calculation, so as to obtain a classification probability value set, where the classification probability value set includes probability values of different categories corresponding to training data in the training data set.
In the embodiment of the invention, the text detection model is obtained by pre-training a preset bert-base-Chinese model through the related data acquired in the credit card approval field.
Specifically, the inputting the training data set into a preset text detection model to obtain a classification probability value set includes:
coding the training data set to obtain a training vector set;
performing feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a full-connection layer in the preset text detection model to perform probability calculation, so as to obtain a classification probability value set.
Further, the encoding processing performed on the training data set to obtain a training vector set includes:
performing word segmentation processing on the training data set to obtain a word segmentation data set;
performing stop-word removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
The model training module 102 is configured to calculate a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjust internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than the preset loss threshold value, so as to obtain a standard text detection model.
In the embodiment of the present invention, the calculating the cross entropy loss value between the classification probability value in the classification probability value set and the preset true probability value includes:
the cross entropy loss value is calculated using the following calculation formula:
L = -\frac{1}{n} \sum_{x} \left[ y \ln a + (1 - y) \ln(1 - a) \right]
wherein L is a cross entropy loss value, x is training data in the training data set, y is a preset real probability value, a is a classification probability value, and n represents the total number of training data in the training data set.
Specifically, adjusting the internal parameters of the text detection model according to the cross entropy loss value includes: and if the cross entropy loss value is greater than or equal to a preset loss threshold value, adjusting internal parameters of the text detection model, inputting the training data set into the adjusted text detection model to obtain a classification probability value, and recalculating the cross entropy loss value until the cross entropy loss value is less than the preset loss threshold value to obtain a standard text detection model.
Preferably, the internal parameters may be the gradients and the weights of the model.
The prediction result list generating module 103 is configured to obtain to-be-processed voice data, input the to-be-processed voice data to the standard text detection model for probability calculation, and obtain a prediction probability value set, where the prediction probability value set includes prediction probability values of different preset categories corresponding to the to-be-processed voice data.
In the embodiment of the present invention, the predicted probability value set includes the predicted probability values of the different categories corresponding to the voice data to be processed. For example, if the voice data to be processed includes sentence A, sentence B and sentence C, and the preset categories include category a, category b and category c, then the predicted probability value set includes the predicted probability values of sentence A belonging to categories a, b and c, of sentence B belonging to categories a, b and c, and of sentence C belonging to categories a, b and c.
The prediction result list generating module 103 is configured to aggregate the prediction probability values in the prediction probability value set that are greater than a preset probability threshold, so as to obtain a prediction result list.
In the embodiment of the invention, the predicted probability value set is composed of a plurality of predicted probability values. When a predicted probability value is larger than the preset probability threshold, that predicted probability value is retained; the whole predicted probability value set is traversed, and the retained predicted probability values are summarized.
For example, if the preset probability threshold is 90%, the retained predicted probability values correspond, for each sentence in the voice data to be processed, to the category with the highest probability.
For example, if the predicted probability value set includes predicted probability values of 30%, 50% and 95% for sentence A belonging to categories a, b and c, of 45%, 25% and 96% for sentence B, and of 35%, 60% and 91% for sentence C, then the predicted probability values of sentence A belonging to category c, sentence B belonging to category c and sentence C belonging to category c are retained; that is, the prediction result list contains {95%, 96%, 91%}.
For another example, if the predicted probability value set includes predicted probability values of 95%, 50% and 30% for sentence A belonging to categories a, b and c, of 45%, 96% and 25% for sentence B, and of 35%, 60% and 91% for sentence C, then the predicted probability values of sentence A belonging to category a, sentence B belonging to category b and sentence C belonging to category c are retained; that is, the prediction result list again contains {95%, 96%, 91%}.
The data verification module 104 is configured to extract the categories corresponding to the predicted probability values in the prediction result list, and determine that the to-be-processed voice data passes the verification when the number of the categories is greater than or equal to a preset number threshold, where the preset number threshold is equal to the total number of the different preset categories.
In the embodiment of the invention, when the number of the categories is smaller than the preset number threshold, it indicates that sentences of one or more categories are absent from the voice data to be processed, and it is determined that the voice data to be processed does not pass the verification.
For example, if the different preset categories include category a, category b and category c, the total number of the different preset categories is 3. When the categories corresponding to the predicted probability values in the extracted prediction result list include only category c, it is determined that the voice data to be processed does not pass the verification; when they include category a, category b and category c, the number of the categories is 3, and it is determined that the voice data to be processed passes the verification.
According to the embodiments of the present invention, text enhancement processing is performed on the labeled data set in combination with a preset dictionary to obtain a training data set, which increases the robustness of the training data and thereby improves the robustness and accuracy of the standard text detection model trained on it. Voice data to be processed are then obtained and input into the standard text detection model for probability calculation, yielding a predicted probability value set that contains the predicted probability values of the different preset categories corresponding to the voice data to be processed; the categories of the content contained in the voice data to be processed are thus determined by the standard text detection model, and verification is performed according to its output. Because the standard text detection model outputs results quickly and accurately, the verification result can likewise be obtained quickly and accurately. Therefore, the voice information verification device provided by the present invention can quickly and accurately verify the content contained in voice.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the voice information verification method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a speech information verification program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, etc. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash memory Card (Flash Card) provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the voice information verification program 12, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects respective parts of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device 1 and processes data by running or executing programs or modules (for example, executing a voice information verification program or the like) stored in the memory 11, and calling data stored in the memory 11.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 3 shows only an electronic device with certain components; it will be understood by those skilled in the art that the structure shown in Fig. 3 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The voice information verification program 12 stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, which when executed in the processor 10, can implement:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability values in the classification probability value set and the preset true probability values, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value, to obtain a standard text detection model;
obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a predicted probability value set, wherein the predicted probability value set comprises predicted probability values of different preset categories corresponding to the voice data to be processed;
summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a prediction result list;
and extracting the categories corresponding to the predicted probability values in the prediction result list, and judging that the voice data to be processed passes the verification when the number of the categories is greater than or equal to a preset number threshold value, wherein the preset number threshold value is equal to the total number of the different preset categories.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable storage medium may be volatile or nonvolatile; for example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value and a preset real probability value in the classification probability value set, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a predicted probability value set, wherein the predicted probability value set comprises predicted probability values of different preset categories corresponding to the voice data to be processed;
summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a predicted result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the different preset categories.
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims should not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. A method for verifying voice information, the method comprising:
acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set;
labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set;
inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
calculating a cross entropy loss value between the classification probability value and a preset real probability value in the classification probability value set, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
obtaining voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a predicted probability value set, wherein the predicted probability value set comprises predicted probability values of different preset categories corresponding to the voice data to be processed;
summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a predicted result list;
and extracting categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold, wherein the preset number threshold is equal to the total number of the different preset categories.
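For illustration, the training described in the fourth step of claim 1 (computing the cross entropy loss value and adjusting internal parameters until the loss falls below the preset loss threshold) might look like the following sketch. The optimizer, learning rate, threshold value and stopping rule are assumptions rather than part of the claim, and the model is assumed to return raw logits.

```python
import torch
import torch.nn as nn

LOSS_THRESHOLD = 0.05  # preset loss threshold (assumed value)

def train(model, data_loader, lr=1e-3, max_epochs=100):
    criterion = nn.CrossEntropyLoss()                        # cross entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for batch, labels in data_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch), labels)           # model returns logits
            loss.backward()
            optimizer.step()                                 # adjust internal parameters
            epoch_loss += loss.item()
        # Stop once the average loss is smaller than the preset loss threshold.
        if epoch_loss / len(data_loader) < LOSS_THRESHOLD:
            break  # the standard text detection model is obtained
    return model
```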
2. The method for verifying voice information according to claim 1, wherein the carrying out text enhancement processing on the labeled data set in combination with a preset dictionary to obtain a training data set comprises:
screening according to the text lengths of the texts in the labeled data set to obtain a screening data set;
and carrying out text enhancement on the screening data set in combination with the preset dictionary to obtain the training data set.
3. The method for verifying voice information according to claim 2, wherein the screening according to the text lengths in the labeled data set to obtain a screening data set comprises:
removing texts whose text lengths are smaller than a first text threshold from the labeled data set;
retaining texts in the labeled data set whose text lengths are greater than the first text threshold and less than a second text threshold, wherein the second text threshold is greater than the first text threshold;
and if texts whose text lengths are greater than the second text threshold exist in the labeled data set, directly intercepting a plurality of characters from those texts in front-to-back order for use in the screening data set.
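A minimal sketch of this length-based screening follows, assuming illustrative threshold values (the claim leaves the concrete thresholds to the implementation):

```python
FIRST_THRESHOLD = 5     # first text threshold (assumed value)
SECOND_THRESHOLD = 128  # second text threshold (assumed value)

def screen(labeled_texts):
    """Length-based screening of the labeled data set, as in claim 3."""
    screening_set = []
    for text in labeled_texts:
        if len(text) < FIRST_THRESHOLD:
            continue                          # remove texts that are too short
        if len(text) > SECOND_THRESHOLD:
            text = text[:SECOND_THRESHOLD]    # intercept leading characters only
        screening_set.append(text)            # retain mid-length texts as-is
    return screening_set
```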
4. The method for verifying voice information according to claim 2, wherein the carrying out text enhancement on the screening data set in combination with a preset dictionary to obtain a training data set comprises:
randomly extracting a plurality of target words from the screening data set;
randomly extracting synonyms of the target words from a preset dictionary;
and adding the synonyms into the screening data set to obtain a training data set.
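One common reading of this enhancement is synonym replacement, sketched below. The dictionary contents, the whitespace tokenization, and the choice to append synonym-substituted sentences (rather than bare synonyms) are assumptions for illustration; real Chinese text would require a proper word segmenter.

```python
import random

# Stand-in for the preset dictionary of synonyms (illustrative contents).
SYNONYM_DICT = {
    "verify": ["check", "confirm"],
    "account": ["profile"],
}

def enhance(screening_set, n_targets=2):
    """Augment the screening data set with synonym-substituted variants."""
    training_set = list(screening_set)
    for text in screening_set:
        words = text.split()
        candidates = [w for w in words if w in SYNONYM_DICT]
        for word in random.sample(candidates, min(n_targets, len(candidates))):
            synonym = random.choice(SYNONYM_DICT[word])  # randomly extract a synonym
            training_set.append(text.replace(word, synonym, 1))
    return training_set
```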
5. The method for verifying voice information according to any one of claims 1 to 4, wherein the inputting the training data set into the preset text detection model for probability calculation to obtain the classification probability value set comprises:
encoding the training data set to obtain a training vector set;
performing feature extraction processing on the training vector set to obtain a feature vector set;
and inputting the feature vector set to a full-connection layer in the preset text detection model to perform probability calculation, so as to obtain a classification probability value set.
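A hedged PyTorch sketch of the three stages of claim 5 follows; the embedding/LSTM/linear architecture and all layer sizes are assumptions, since the claim only names encoding, feature extraction, and a fully connected layer.

```python
import torch
import torch.nn as nn

class TextDetector(nn.Module):
    """Illustrative model: encode, extract features, compute probabilities."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)              # encoding
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True)   # feature extraction
        self.fc = nn.Linear(hidden, n_classes)                        # fully connected layer

    def forward(self, token_ids):
        vectors = self.embed(token_ids)          # training vector set
        features, _ = self.encoder(vectors)      # feature vector set
        logits = self.fc(features[:, -1, :])     # last hidden state
        return torch.softmax(logits, dim=-1)     # classification probability values

probs = TextDetector()(torch.randint(0, 10000, (1, 12)))  # toy forward pass
```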
6. The method for verifying voice information according to claim 5, wherein the encoding the training data set to obtain a training vector set comprises:
word segmentation processing is carried out on the training data set to obtain a word segmentation data set;
performing stop-word removal processing on the word segmentation data set to obtain a standard data set;
and mapping the standard data in the standard data set to obtain a training vector set.
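The encoding pipeline of claim 6 can be sketched as below; the stop-word list and vocabulary are illustrative stand-ins, and whitespace splitting substitutes for real word segmentation (which for Chinese would use a segmenter such as jieba).

```python
STOP_WORDS = {"the", "a", "of"}                    # assumed stop-word list
VOCAB = {"please": 1, "verify": 2, "identity": 3}  # assumed token-to-index mapping

def encode(text):
    tokens = text.lower().split()                        # word segmentation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # stop-word removal
    return [VOCAB.get(t, 0) for t in tokens]             # map to vector (0 = unknown)

print(encode("Please verify the identity"))  # [1, 2, 3]
```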
7. The voice information verification method of claim 1, wherein the calculating a cross entropy loss value between the classification probability value and a preset true probability value in the classification probability value set comprises:
the cross entropy loss value is calculated using the following calculation formula:
L = -\frac{1}{n} \sum_{x} \left[ y \ln a + (1 - y) \ln (1 - a) \right]
wherein L is a cross entropy loss value, x is training data in the training data set, y is a preset real probability value, a is a classification probability value, and n represents the total number of training data in the training data set.
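Assuming the standard binary form reconstructed above, the formula can be checked numerically with a short sketch (the sample values are illustrative only):

```python
import math

def cross_entropy(samples):
    """samples is a list of (y, a) pairs: true and predicted probabilities."""
    n = len(samples)
    return -sum(y * math.log(a) + (1 - y) * math.log(1 - a)
                for y, a in samples) / n

print(cross_entropy([(1.0, 0.9), (0.0, 0.2)]))  # ~0.164
```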
8. A voice information verification apparatus, the apparatus comprising:
the data processing module is used for acquiring an original voice data set, and performing voice recognition on the original voice data set by using a preset voice recognition model to obtain a text data set; labeling the types of the text data in the text data set to obtain a labeled data set, and carrying out text enhancement processing on the labeled data set by combining a preset dictionary to obtain a training data set; inputting the training data set into a preset text detection model for probability calculation to obtain a classification probability value set, wherein the classification probability value set comprises probability values of different categories corresponding to training data in the training data set;
the model training module is used for calculating a cross entropy loss value between the classification probability value and a preset true probability value in the classification probability value set, and adjusting internal parameters of the preset text detection model according to the cross entropy loss value until the cross entropy loss value is smaller than a preset loss threshold value to obtain a standard text detection model;
the prediction result list generation module is used for acquiring voice data to be processed, inputting the voice data to be processed into the standard text detection model for probability calculation to obtain a prediction probability value set, wherein the prediction probability value set comprises prediction probability values of different preset categories corresponding to the voice data to be processed; summarizing the predicted probability values which are larger than a preset probability threshold in the predicted probability value set to obtain a predicted result list;
and the data verification module is used for extracting the categories corresponding to the prediction probability values in the prediction result list, and judging that the voice data to be processed passes verification when the number of the categories is greater than or equal to a preset number threshold value, wherein the preset number threshold value is equal to the total number of the categories of different preset categories.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores computer program instructions executable by the at least one processor to enable the at least one processor to perform the voice information verification method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the speech information verification method according to any one of claims 1 to 7.
CN202011508407.7A 2020-12-18 2020-12-18 Voice information verification method and device, electronic equipment and medium Active CN112560453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011508407.7A CN112560453B (en) 2020-12-18 2020-12-18 Voice information verification method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011508407.7A CN112560453B (en) 2020-12-18 2020-12-18 Voice information verification method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN112560453A (en) 2021-03-26
CN112560453B (en) 2023-07-14

Family

ID=75031801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011508407.7A Active CN112560453B (en) 2020-12-18 2020-12-18 Voice information verification method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112560453B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966509B (en) * 2021-04-16 2023-04-07 重庆度小满优扬科技有限公司 Text quality evaluation method and device, storage medium and computer equipment
CN113257227B (en) * 2021-04-25 2024-03-01 平安科技(深圳)有限公司 Speech recognition model performance detection method, device, equipment and storage medium
CN113609851A (en) * 2021-07-09 2021-11-05 浙江连信科技有限公司 Psychological idea cognitive deviation identification method and device and electronic equipment
CN113590786A (en) * 2021-07-28 2021-11-02 平安科技(深圳)有限公司 Data prediction method, device, equipment and storage medium
CN113744724A (en) * 2021-08-30 2021-12-03 苏州浪潮智能科技有限公司 Voice conversion method, device, equipment and storage medium
CN114399766B (en) * 2022-01-18 2024-05-10 平安科技(深圳)有限公司 Optical character recognition model training method, device, equipment and medium
CN115082736A (en) * 2022-06-23 2022-09-20 平安普惠企业管理有限公司 Garbage identification and classification method and device, electronic equipment and storage medium
CN116992033B (en) * 2023-09-25 2023-12-08 北京中关村科金技术有限公司 Text classification threshold determining method, text classification method and related device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767759A (en) * 2019-02-14 2019-05-17 重庆邮电大学 End-to-end speech recognition methods based on modified CLDNN structure
CN110021308A (en) * 2019-05-16 2019-07-16 北京百度网讯科技有限公司 Voice mood recognition methods, device, computer equipment and storage medium
CN110223675A (en) * 2019-06-13 2019-09-10 苏州思必驰信息科技有限公司 The screening technique and system of training text data for speech recognition
CN110503952A (en) * 2019-07-29 2019-11-26 北京搜狗科技发展有限公司 A kind of method of speech processing, device and electronic equipment
CN110895935A (en) * 2018-09-13 2020-03-20 阿里巴巴集团控股有限公司 Speech recognition method, system, device and medium

Also Published As

Publication number Publication date
CN112560453A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN112560453B (en) Voice information verification method and device, electronic equipment and medium
CN112507936B (en) Image information auditing method and device, electronic equipment and readable storage medium
CN112541745B (en) User behavior data analysis method and device, electronic equipment and readable storage medium
CN112541338A (en) Similar text matching method and device, electronic equipment and computer storage medium
CN113157927B (en) Text classification method, apparatus, electronic device and readable storage medium
CN113961764A (en) Method, device, equipment and storage medium for identifying fraud telephone
CN113064994A (en) Conference quality evaluation method, device, equipment and storage medium
CN113807103A (en) Recruitment method, device, equipment and storage medium based on artificial intelligence
CN113360654B (en) Text classification method, apparatus, electronic device and readable storage medium
CN114840684A (en) Map construction method, device and equipment based on medical entity and storage medium
CN113205814B (en) Voice data labeling method and device, electronic equipment and storage medium
CN113344125B (en) Long text matching recognition method and device, electronic equipment and storage medium
CN112579781B (en) Text classification method, device, electronic equipment and medium
CN112992187B (en) Context-based voice emotion detection method, device, equipment and storage medium
CN116468025A (en) Electronic medical record structuring method and device, electronic equipment and storage medium
CN113705201B (en) Text-based event probability prediction evaluation algorithm, electronic device and storage medium
CN113626605B (en) Information classification method, device, electronic equipment and readable storage medium
CN113515591B (en) Text defect information identification method and device, electronic equipment and storage medium
CN114780688A (en) Text quality inspection method, device and equipment based on rule matching and storage medium
CN114186028A (en) Consult complaint work order processing method, device, equipment and storage medium
CN113808616A (en) Voice compliance detection method, device, equipment and storage medium
CN113486238A (en) Information pushing method, device and equipment based on user portrait and storage medium
CN113723114A (en) Semantic analysis method, device and equipment based on multi-intent recognition and storage medium
CN114548825B (en) Complaint work order distortion detection method, device, equipment and storage medium
CN114462411B (en) Named entity recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant