CN111883179A - Emotion voice recognition method based on big data machine learning - Google Patents
- Publication number
- CN111883179A CN111883179A CN202010706982.1A CN202010706982A CN111883179A CN 111883179 A CN111883179 A CN 111883179A CN 202010706982 A CN202010706982 A CN 202010706982A CN 111883179 A CN111883179 A CN 111883179A
- Authority
- CN
- China
- Prior art keywords
- word
- key information
- big data
- emotion
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The invention discloses an emotion voice recognition method based on big data machine learning, which comprises the steps of S1, acquiring a quantity of text and audio and converting them into editable, extractable text or binary codes; S2, exhaustively screening key information according to the converted binary codes and storing it; S3, reading the key information obtained by the exhaustive screening, and performing language reconstruction and output according to the context constraints; and S4, calculating correlation coefficients between items of key information based on a big data association model. Compared with traditional manual emotion analysis and extraction, the method effectively solves the low efficiency and high error rate of existing manual quantitative analysis.
Description
Technical Field
The invention belongs to the technical field of big data, and particularly relates to an emotion voice recognition method based on big data machine learning.
Background
With the rapid development of the internet, the traditional route of emotion research can clearly no longer meet today's complex conditions. Facing the diversification of complex propositions, modes of propagation, and forms in the global landscape, an efficient and reliable research method with language identification and logical analysis capabilities is urgently needed, one that can serve personnel management work under the new situation and extend algorithm applications in media convergence scenarios.
Emotion analysis is semantic mining of text: it can identify and extract subjective information from original text material and can help enterprises understand the social emotion toward their brands, products, or services while monitoring online conversations. However, analysis of social media streams is usually limited to basic emotion analysis and count-based metrics; this merely skims the surface and misses the high-value insights waiting to be discovered.
In addition, existing emotion analysis is widely applied in psychology, for example to the interrogation of criminal suspects, the psychological assessment of patients, and the emotion analysis of individuals.
Currently, emotion analysis has the following defects:
1. Emotion is unstable and highly fluid, and there is as yet no systematic body of research to follow; particularly within the humanities and social sciences, the discussion and measurement of emotion are generalized and mostly descriptive, lacking academic rigor and persuasiveness;
2. When examining the relation between emotion and personal emotional experience, the influencing factors lack accurate quantification, and factors such as audience diversity, differences in experience, and socio-cultural context are not brought into the research;
3. At present, quantification is carried out only manually, with low efficiency and a high error rate, and there is no method for reconstruction analysis of the extracted emotion keywords.
Disclosure of Invention
The invention aims to provide an emotion voice recognition method based on big data machine learning that addresses the above defects in the prior art, so as to solve the low efficiency and high error rate of existing manual quantitative analysis.
In order to achieve this purpose, the invention adopts the following technical scheme:
an emotion speech recognition method based on big data machine learning, comprising:
s1, acquiring a plurality of characters and audios, and converting the characters and the audios into editable and extractable characters or binary codes;
s2, exhaustively screening key information according to the converted binary codes, and storing the key information;
s3, reading the key information obtained by the exhaustive screening, and performing language reconstruction and output according to the context constraints;
and S4, calculating to obtain a correlation coefficient between the key information based on the big data correlation model.
Preferably, S1 acquires a quantity of text and audio and converts them into editable and extractable text or binary codes, including character recognition extraction:
wherein w_i is a character taking its value in the matrix M'; i is the character length; L is the character range, namely the dictionary; k and l are search coefficients; b is the dividing point; and S is the extracted character;
and (3) voice recognition and extraction:
wherein, p (S) is the probability of the sentence S, W1 is the word sequence, i is the word sequence number, n is the word sequence length, t is the time pass coefficient, tt is the total time of the speech length;
preferably, the exhaustive screening and storage of key information according to the converted binary code in S2 includes:
and screening according to the converted binary codes:
wherein x is the key information, y is the searched temporary information, and l_max and l_min are the maximum and minimum information lengths, respectively; the screened key information is stored in a binary format.
Preferably, the step S3 of reading the key information obtained by the exhaustive screening and performing language reconstruction and output according to the context constraints includes:
s3.1, inputting keywords;
s3.2, carrying out language reconstruction according to context limitation, wherein a reconstruction matrix is as follows:
wherein W1 is the word sequence, P(W1) is the probability of occurrence of the word W1, i is the word sequence number, n is the word sequence length, and Ω is the reconstructed output sentence;
and S3.3, outputting and saving the output statement.
Preferably, in S4, based on the big data association model, a correlation coefficient between the key information is calculated:
wherein f(W1)_11 is the number of times the word W1 appears in both scene A and scene B; f(W1)_00 is the number of times it appears in neither A nor B; f(W1)_01 is the number of times it appears in scene B but not in A; f(W1)_10 is the number of times it appears in A but not in B; t is the time pass coefficient and tt is the total time of the speech length; f(W1)_1+ is the number of occurrences of W1 in A; f(W1)_+1 is the number of occurrences of W1 in B; f(W1)_0+ is the number of times W1 does not appear in A; f(W1)_+0 is the number of times it does not appear in B; and φ is the correlation coefficient, with a value range of −1 to +1: zero if the variables are independent, greater than zero if they are positively correlated, and less than zero otherwise.
The emotion voice recognition method based on big data machine learning provided by the invention has the following beneficial effects:
according to the machine learning method based on big data, character extraction and language conversion are carried out according to massive sample materials, key information is screened and extracted and is stored in a classified mode, relevant emotion recombination and reanalysis are further achieved according to context, finally, the relevance of emotion keywords or sentences can be obtained based on a big data association model, and emotion analysis research is achieved; compared with the traditional manual emotion analysis and extraction, the method can effectively solve the problem that the existing manual quantitative analysis is low in efficiency and high in error rate.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of language reconstruction in accordance with the present invention.
Detailed Description
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are possible within the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept is protected.
According to an embodiment of the application, referring to fig. 1, the method for emotion speech recognition based on big data machine learning of the present scheme includes:
s1, acquiring a plurality of characters and audios, and converting the characters and the audios into editable and extractable characters or binary codes;
s2, exhaustively screening key information according to the converted binary codes, and storing the key information;
s3, reading the key information obtained by the exhaustive screening, and performing language reconstruction and output according to the context constraints;
and S4, calculating to obtain a correlation coefficient between the key information based on the big data correlation model.
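Read as a whole, steps S1 to S4 form a simple pipeline. The sketch below assumes each stage is implemented as its own function and injects toy stand-ins; every name and stand-in here is illustrative, not taken from the patent:

```python
def emotion_pipeline(texts, audios, extract, screen, reconstruct, correlate):
    """S1-S4 as a pipeline; the four stage functions are injected so the
    sketch stays agnostic about their concrete implementations."""
    codes = [extract(item) for item in texts + audios]   # S1: convert to codes
    key_info = screen(codes)                             # S2: exhaustive screening
    sentences = [reconstruct(k) for k in key_info]       # S3: context reconstruction
    return correlate(key_info), sentences                # S4: correlation coefficients

# toy stand-ins for the four stages
result = emotion_pipeline(
    texts=["i feel calm"], audios=[],
    extract=str.split,
    screen=lambda codes: [w for c in codes for w in c if w in {"calm", "sad"}],
    reconstruct=lambda k: "detected emotion: " + k,
    correlate=lambda ks: {k: 1.0 for k in ks})
print(result)  # ({'calm': 1.0}, ['detected emotion: calm'])
```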
According to an embodiment of the present application, the above steps are described in detail below.
step S1, acquiring a quantity of text and audio and converting it into editable and extractable text or binary codes, which specifically includes character recognition extraction and voice conversion recognition;
the character recognition and extraction method comprises the following steps:
wherein ω_i is a character taking its value in the matrix M'; i is the character length; L is the character range, namely the dictionary; k and l are search coefficients; b is the dividing point; and S is the extracted character.
The voice recognition and extraction method comprises the following steps:
where p(S) is the probability of occurrence of the sentence S, W1 is the word sequence, i is the word sequence number, n is the word sequence length, t is the time pass coefficient, and tt is the total time of the speech length.
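The chain-rule decomposition implied by this legend, p(S) as a product of per-word conditional probabilities, can be sketched with a bigram model standing in for the patent's unspecified language model; the add-one smoothing and the toy corpus are assumptions:

```python
from collections import Counter

def sentence_probability(sentence, unigrams, bigrams):
    """P(S) ~= prod_i P(w_i | w_{i-1}): a bigram stand-in for the chain
    rule P(S) = prod_i P(W_i | W_1..W_{i-1}) named in the legend."""
    p = 1.0
    prev = "<s>"  # sentence-start marker
    for w in sentence:
        # maximum-likelihood estimate with add-one smoothing over the vocab
        p *= (bigrams[(prev, w)] + 1) / (unigrams[prev] + len(unigrams))
        prev = w
    return p

# toy counts built from a two-sentence corpus
corpus = [["i", "am", "happy"], ["i", "am", "sad"]]
unigrams = Counter(w for s in corpus for w in ["<s>"] + s)
bigrams = Counter(b for s in corpus for b in zip(["<s>"] + s, s))
print(sentence_probability(["i", "am", "happy"], unigrams, bigrams))
```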
Step S2, exhaustively screening and storing the key information according to the converted binary code, which specifically includes:
extraction and conversion of key information
Emotion itself is not only the presented result of an individual's emotional expression but also a way of presenting cultural and social emotional standards. According to the emotion research target and requirements, the character codes related to emotion are configured, and keyword information such as calm, excited, lost, depressed, pensive, mild, sensory pleasure, and self-efficacy is extracted and converted into a binary format.
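The conversion of the extracted keyword information into a binary format might look like the following sketch; the UTF-8 encoding and the bit-string representation are assumptions, since the patent does not specify the format:

```python
# subset of the keyword information listed above (illustrative)
EMOTION_KEYWORDS = ["calm", "excited", "lost", "depressed", "pensive", "mild"]

def to_binary(word, encoding="utf-8"):
    """Encode a keyword as a string of bits over its byte representation."""
    return "".join(f"{byte:08b}" for byte in word.encode(encoding))

def from_binary(bits, encoding="utf-8"):
    """Inverse transform: regroup the bits into bytes and decode."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode(encoding)

encoded = {w: to_binary(w) for w in EMOTION_KEYWORDS}
print(encoded["calm"][:8])  # first byte of "calm" as bits
```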
Exhaustive key information screening and storage method
Screening is carried out on the converted binary code by a screening formula:
wherein x is the key information, y is the searched temporary information, and l_max and l_min are the maximum and minimum information lengths, respectively; the screened key information is stored in a binary format.
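Since the screening formula itself is not reproduced in the source, the sketch below assumes a simple reading of the legend: keep searched temporary information y whose length lies between l_min and l_max and which contains known key information x. The concrete bounds, the containment test, and the on-disk layout are all illustrative:

```python
def screen_key_information(candidates, keywords, l_min=2, l_max=32):
    """Keep searched temporary information y with l_min <= len(y) <= l_max
    that contains a known key term x; bounds here are illustrative."""
    return [y for y in candidates
            if l_min <= len(y) <= l_max and any(x in y for x in keywords)]

def store_binary(items, path):
    """Store screened key information in a binary format (UTF-8 bytes,
    newline-delimited: an assumed layout, not specified by the patent)."""
    with open(path, "wb") as fh:
        for item in items:
            fh.write(item.encode("utf-8") + b"\n")

hits = screen_key_information(
    ["he sounded calm today", "x", "a very long irrelevant sentence " * 4],
    keywords=["calm", "excited"])
print(hits)  # only the first candidate survives both filters
```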
Step S3, reading the key information obtained by the exhaustive screening and performing language reconstruction and output according to the context constraints, which specifically includes:
Referring to FIG. 2:
s3.1, reading keywords and inputting the keywords;
s3.2, carrying out language reconstruction according to context limitation, wherein a reconstruction matrix is as follows:
wherein W1 is the word sequence, P(W1) is the probability of occurrence of the word W1, i is the word sequence number, n is the word sequence length, and Ω is the reconstructed output sentence.
And S3.3, outputting a reconstruction statement.
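One hedged reading of steps S3.1 to S3.3 is that, among candidate orderings of the extracted keywords, the reconstruction selects the sequence with the highest word-sequence probability. The scorer below is a toy stand-in for the patent's reconstruction matrix, which is not reproduced in the source:

```python
from itertools import permutations

def reconstruct(keywords, score):
    """Return the ordering of the extracted keywords that maximizes a
    sentence score (a stand-in for the reconstruction matrix)."""
    best = max(permutations(keywords), key=score)
    return " ".join(best)

# toy scorer: reward adjacent pairs seen in a reference corpus
GOOD_PAIRS = {("i", "feel"), ("feel", "calm")}

def score(seq):
    return sum((a, b) in GOOD_PAIRS for a, b in zip(seq, seq[1:]))

print(reconstruct(["calm", "i", "feel"], score))  # -> i feel calm
```

Exhaustive permutation search is only viable for a handful of keywords; a beam search over the same score would be the natural replacement at scale.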
Step S4, calculating a correlation coefficient between the key information based on the big data association model, which specifically includes:
The intrinsic relevance of the emotion keywords is further analyzed by using a big data association model, constructed as follows:
wherein f(W1)_11 represents the number of times the word W1 appears in both scene A and scene B; f(W1)_00 the number of times it appears in neither A nor B; f(W1)_01 the number of times it appears in scene B but not in A; f(W1)_10 the number of times it appears in A but not in B; f(W1)_1+ the number of occurrences of W1 in A; f(W1)_+1 the number of occurrences of W1 in B; f(W1)_0+ the number of times W1 does not appear in A; f(W1)_+0 the number of times it does not appear in B; t is the time pass coefficient and tt is the total time of the speech length.
φ is the correlation coefficient, with a value range of −1 to +1: it is zero if the variables are mutually independent, greater than zero if they are positively correlated, and less than zero otherwise.
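The legend above matches the standard φ (phi) coefficient of a 2×2 contingency table, φ = (f_11·f_00 − f_01·f_10) / √(f_1+ · f_+1 · f_0+ · f_+0); under that assumption it can be computed as:

```python
from math import sqrt

def phi_coefficient(f11, f00, f01, f10):
    """phi for a 2x2 contingency table of word-W1 occurrence in scenes
    A and B: (f11*f00 - f01*f10) / sqrt(f1+ * f+1 * f0+ * f+0)."""
    f1p, f0p = f11 + f10, f01 + f00   # marginals: in A / not in A
    fp1, fp0 = f11 + f01, f10 + f00   # marginals: in B / not in B
    denom = sqrt(f1p * f0p * fp1 * fp0)
    return 0.0 if denom == 0 else (f11 * f00 - f01 * f10) / denom

# perfectly correlated occurrence across scenes A and B
print(phi_coefficient(f11=5, f00=5, f01=0, f10=0))  # -> 1.0
# independent occurrence
print(phi_coefficient(f11=2, f00=2, f01=2, f10=2))  # -> 0.0
```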
According to one embodiment of the present application, the following comparison is given.
The traditional method of emotion analysis relies on manual extraction of keywords or sentences; it is time-consuming and may leave extraction gaps and lose local keywords. Experiments show that extracting a large batch of text by the traditional method usually takes at least one week or more, whereas applying the algorithm of the invention solves these problems: no keywords or sentences are lost, and the extraction time is short, as shown in the following table.
TABLE 1 Comparison of the conventional character extraction and recognition method with the method of the invention

Method | Time | Automatic reconstruction
---|---|---
Conventional extraction method | At least 5 days | No
Extraction method of the invention | 1 h to 3 h | Yes
As can be seen from the table, the algorithm greatly shortens the time required for the work, frees up manpower, improves work efficiency, and supports reconstruction from the extracted key information.
The machine learning method based on big data performs character extraction and language conversion on massive sample material, screens, extracts, and stores key information by category, then realizes emotion recombination and reanalysis according to context, and finally obtains the relevance of emotion keywords or sentences from the big data association model, thereby realizing emotion analysis research. Compared with traditional manual emotion analysis and extraction, the method effectively solves the low efficiency and high error rate of existing manual quantitative analysis.
The method can identify and extract subjective information in original text material and can help enterprises understand the social emotion toward their brands, products, or services while monitoring online conversations. Based on the method's recognition algorithm, high-value insights into enterprises' online conversations are discovered.
In addition, the invention can be applied in psychology, for example to the interrogation of criminal suspects, the psychological assessment of patients, and the emotion analysis of individuals.
While the embodiments of the invention have been described in detail in connection with the accompanying drawings, they do not limit the scope of the invention. Various modifications and changes may be made by those skilled in the art, without inventive effort, within the scope of the appended claims.
Claims (5)
1. An emotion voice recognition method based on big data machine learning is characterized by comprising the following steps:
s1, acquiring a plurality of characters and audios, and converting the characters and the audios into editable and extractable characters or binary codes;
s2, exhaustively screening key information according to the converted binary codes, and storing the key information;
s3, reading the key information obtained by the exhaustive screening, and performing language reconstruction and output according to the context constraints;
and S4, calculating to obtain a correlation coefficient between the key information based on the big data correlation model.
2. The emotion speech recognition method based on big data machine learning of claim 1, wherein S1 obtains a plurality of words and audios and converts the words and audios into editable and extractable words or binary codes, including word recognition extraction:
wherein ω_i is a character taking its value in the matrix M'; i is the character length; L is the character range, namely the dictionary; k and l are search coefficients; b is the dividing point; and S is the extracted character;
and (3) voice recognition and extraction:
where p (S) is the probability of occurrence of the sentence S, W1 is the word sequence, i is the word sequence number, n is the word sequence length, t is the time pass coefficient, and tt is the total time of the speech length.
3. The emotion speech recognition method based on big data machine learning of claim 1, wherein the exhaustive screening and storage of key information according to the converted binary code in S2 includes:
and screening according to the converted binary codes:
wherein x is the key information, y is the searched temporary information, and l_max and l_min are the maximum and minimum information lengths, respectively; the screened key information is stored in a binary format.
4. The emotion speech recognition method based on big data machine learning of claim 1, wherein the step S3 of reading the key information obtained by the exhaustive screening and performing language reconstruction and output according to the context constraints includes:
s3.1, inputting keywords;
s3.2, carrying out language reconstruction according to context limitation, wherein a reconstruction matrix is as follows:
wherein W1 is the word sequence, P(W1) is the probability of occurrence of the word W1, i is the word sequence number, n is the word sequence length, and Ω is the reconstructed output sentence;
and S3.3, outputting and saving the output statement.
5. The emotion speech recognition method based on big data machine learning of claim 1, wherein in S4, based on the big data association model, the correlation coefficient between the key information is calculated as follows:
wherein f(W1)_11 is the number of times the word W1 appears in both scene A and scene B; f(W1)_00 is the number of times it appears in neither A nor B; f(W1)_01 is the number of times it appears in scene B but not in A; f(W1)_10 is the number of times it appears in A but not in B; t is the time pass coefficient and tt is the total time of the speech length; f(W1)_1+ is the number of occurrences of W1 in A; f(W1)_+1 is the number of occurrences of W1 in B; f(W1)_0+ is the number of times W1 does not appear in A; f(W1)_+0 is the number of times it does not appear in B; and φ is the correlation coefficient, with a value range of −1 to +1: zero if the variables are independent, greater than zero if they are positively correlated, and less than zero otherwise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010706982.1A CN111883179B (en) | 2020-07-21 | 2020-07-21 | Emotion voice recognition method based on big data machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010706982.1A CN111883179B (en) | 2020-07-21 | 2020-07-21 | Emotion voice recognition method based on big data machine learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111883179A true CN111883179A (en) | 2020-11-03 |
CN111883179B CN111883179B (en) | 2022-04-15 |
Family
ID=73155057
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010706982.1A Active CN111883179B (en) | 2020-07-21 | 2020-07-21 | Emotion voice recognition method based on big data machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111883179B (en) |
2020
- 2020-07-21: CN application CN202010706982.1A granted as CN111883179B, status Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008092473A1 (en) * | 2007-01-31 | 2008-08-07 | Telecom Italia S.P.A. | Customizable method and system for emotional recognition |
US20150193718A1 (en) * | 2015-03-23 | 2015-07-09 | Looksery, Inc. | Emotion recognition for workforce analytics |
CN106598948A (en) * | 2016-12-19 | 2017-04-26 | 杭州语忆科技有限公司 | Emotion recognition method based on long-term and short-term memory neural network and by combination with autocoder |
WO2019119279A1 (en) * | 2017-12-19 | 2019-06-27 | Wonder Group Technologies Ltd. | Method and apparatus for emotion recognition from speech |
WO2019180452A1 (en) * | 2018-03-21 | 2019-09-26 | Limbic Limited | Emotion data training method and system |
CN108763219A (en) * | 2018-06-06 | 2018-11-06 | 安徽继远软件有限公司 | Speech emotional analysis method based on CNN-RSC combinatorial optimization algorithms |
CN109460737A (en) * | 2018-11-13 | 2019-03-12 | 四川大学 | A kind of multi-modal speech-emotion recognition method based on enhanced residual error neural network |
Non-Patent Citations (3)
Title |
---|
GUOSHENG XU ET AL: "Detecting Sensitive Information of Unstructured Text Using Convolutional Neural Network", 《2019 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY》 * |
JUN DENG ET AL: "Sparse Autoencoder-based Feature Transfer Learning for Speech Emotion Recognition", 《2013 HUMAINE ASSOCIATION CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION》 * |
SONG CHUNXIAO: "Research on Nonlinear Feature Extraction and Feature Optimization of Emotional Speech", 《China Excellent Master's and Doctoral Theses Full-text Database (Master), Information Science and Technology》 * |
Also Published As
Publication number | Publication date |
---|---|
CN111883179B (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108255805B (en) | Public opinion analysis method and device, storage medium and electronic equipment | |
CN110675288B (en) | Intelligent auxiliary judgment method, device, computer equipment and storage medium | |
US20120209606A1 (en) | Method and apparatus for information extraction from interactions | |
CN113094578A (en) | Deep learning-based content recommendation method, device, equipment and storage medium | |
CN110362819A (en) | Text emotion analysis method based on convolutional neural networks | |
CN111191051B (en) | Method and system for constructing emergency knowledge map based on Chinese word segmentation technology | |
KR20200119410A (en) | System and Method for Recognizing Emotions from Korean Dialogues based on Global and Local Contextual Information | |
CN106776832A (en) | Processing method, apparatus and system for question and answer interactive log | |
CN109446337B (en) | Knowledge graph construction method and device | |
CN103885924A (en) | Field-adaptive automatic open class subtitle generating system and field-adaptive automatic open class subtitle generating method | |
CN111159405B (en) | Irony detection method based on background knowledge | |
CN110196897B (en) | Case identification method based on question and answer template | |
CN115272533A (en) | Intelligent image-text video conversion method and system based on video structured data | |
CN111737424A (en) | Question matching method, device, equipment and storage medium | |
CN110347812A (en) | A kind of search ordering method and system towards judicial style | |
CN116628173B (en) | Intelligent customer service information generation system and method based on keyword extraction | |
CN111191413B (en) | Method, device and system for automatically marking event core content based on graph sequencing model | |
CN111883179B (en) | Emotion voice recognition method based on big data machine learning | |
CN112069402A (en) | Personalized comment recommendation method based on emotion and graph convolution neural network | |
CN111736804A (en) | Method and device for identifying App key function based on user comment | |
Khasanova et al. | Developing a production system for Purpose of Call detection in business phone conversations | |
CN112668284B (en) | Legal document segmentation method and system | |
CN112632985A (en) | Corpus processing method and device, storage medium and processor | |
CN115795057B (en) | Audit knowledge processing method and system based on AI technology | |
CN113642321B (en) | Financial field-oriented causal relationship extraction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |