CN111883179B - Emotion voice recognition method based on big data machine learning - Google Patents


Info

Publication number
CN111883179B
Authority
CN
China
Prior art keywords
word, key information, big data, emotion, screening
Prior art date
Legal status
Active
Application number
CN202010706982.1A
Other languages
Chinese (zh)
Other versions
CN111883179A (en)
Inventor
徐书婕
袁婧
吴海临
覃建军
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010706982.1A
Publication of CN111883179A
Application granted
Publication of CN111883179B
Status: Active

Classifications

    • G: PHYSICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/63: Speech or voice analysis techniques specially adapted for estimating an emotional state
    • G06F18/24: Pattern recognition; classification techniques
    • G06N20/00: Machine learning


Abstract

The invention discloses an emotion voice recognition method based on big data machine learning, comprising: S1, acquiring a quantity of text and audio and converting it into editable, extractable text or binary codes; S2, performing search-based screening of key information on the converted binary codes and storing the result; S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints; and S4, calculating correlation coefficients between items of key information based on a big data association model. Compared with traditional manual emotion analysis and extraction, the method effectively addresses the low efficiency and high error rate of existing manual quantitative analysis.

Description

Emotion voice recognition method based on big data machine learning
Technical Field
The invention belongs to the technical field of big data, and particularly relates to an emotion voice recognition method based on big data machine learning.
Background
With the rapid development of the internet, traditional routes of emotion research can no longer keep pace with today's complex landscape: across the global pattern, propositions, propagation modes, and formats have diversified. An efficient, reliable research method with language-recognition and logic-analysis capabilities is therefore urgently needed, both to support personnel-management work under the new situation and to extend algorithmic applications in converged-media scenarios.
Emotion analysis is semantic mining of text: it identifies and extracts subjective information from raw text material and can help enterprises understand the public sentiment around their brands, products, or services while monitoring online conversations. However, analysis of social media streams is typically limited to basic sentiment analysis and count-based metrics, which merely skims the surface and misses the high-value insights waiting to be discovered.
Beyond this, existing emotion analysis is widely applied in psychology, for example to the interrogation of criminal suspects, the psychological assessment of patients, and the analysis of individuals' emotions.
Currently, emotion analysis has the following shortcomings:
1. Emotion is unstable and fluid, and there is as yet no systematic body of research to follow. Especially within the humanities and social sciences, discussion and measurement of emotion remain generalized, mostly background-level descriptive studies lacking academic rigor and persuasiveness.
2. When examining the relation between emotion and personal emotional experience, the influencing factors are not accurately quantified, and factors such as audience diversity, differences in experience, and socio-cultural context are not brought into the research.
3. At present, quantification is performed only manually, with low efficiency and a high error rate, and there is no method for reconstruction analysis of the extracted emotion keywords.
Disclosure of Invention
The invention aims to provide an emotion voice recognition method based on big data machine learning that addresses the above defects of the prior art, in particular the low efficiency and high error rate of existing manual quantitative analysis.
To achieve this purpose, the invention adopts the following technical scheme:
an emotion speech recognition method based on big data machine learning, comprising:
S1, acquiring a quantity of text and audio and converting it into editable, extractable text or binary codes;
S2, performing search-based screening of key information on the converted binary codes and storing the result;
S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints;
and S4, calculating correlation coefficients between items of key information based on a big data association model.
Preferably, in S1, a quantity of text and audio is acquired and converted into editable, extractable text or binary codes, including character recognition and extraction:
[character-extraction formulas, reproduced only as images (BDA0002595171360000021, BDA0002595171360000022) in the original publication]
where the quantity shown in image BDA0002595171360000023 is the value taken by character w_i in the matrix M', i is the character length, L is the character range (i.e., the dictionary), k and l are search coefficients, B is a dividing point, and S is the extracted text;
and speech recognition and extraction:
[sentence-probability formula, reproduced only as an image (BDA0002595171360000024) in the original publication]
where p(S) is the probability of occurrence of sentence S, W1 is the word sequence, i is the word index, n is the word-sequence length, t is the time-pass coefficient, and tt is the total duration of the speech.
preferably, the screening and storing of the pass-through key information according to the converted binary code in S2 includes:
and screening according to the converted binary codes:
Figure BDA0002595171360000031
Figure BDA0002595171360000032
Figure BDA0002595171360000033
wherein x is key information, y is searched temporary information, lmaxAnd lminMaximum and minimum information lengths, respectively; and storing the screened key information according to a binary format.
Preferably, in S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints includes:
S3.1, inputting keywords;
S3.2, performing language reconstruction under context constraints, with the reconstruction matrix:
[reconstruction matrix, reproduced only as an image (BDA0002595171360000034) in the original publication]
where W1 is the word sequence, P(W1) is the probability of occurrence of the word sequence W1, i is the word index, n is the word-sequence length, and Ω is the reconstructed output sentence;
and S3.3, outputting and saving the output sentence.
Preferably, in S4, the correlation coefficients between items of key information are calculated based on the big data association model:
[association-model formula, reproduced only as an image (BDA0002595171360000035) in the original publication]
where f(W1)_11 is the number of times word W1 appears in both scene A and scene B; f(W1)_00 is the number of times W1 appears in neither A nor B; f(W1)_01 is the number of times W1 appears in B but not in A; f(W1)_10 is the number of times W1 appears in A but not in B; t is the time-pass coefficient and tt is the total duration of the speech; f(W1)_1+ is the number of occurrences of W1 in A; f(W1)_+1 is the number of occurrences of W1 in B; f(W1)_0+ is the number of times W1 does not appear in A; f(W1)_+0 is the number of times W1 does not appear in B; and φ is the correlation coefficient, ranging from −1 to +1: zero if the variables are independent, greater than zero if they are positively correlated, and less than zero otherwise.
The emotion voice recognition method based on big data machine learning provided by the invention has the following beneficial effects:
The big-data machine-learning method performs character extraction and language conversion on massive sample material, screens out and stores key information by category, further achieves emotion-related recombination and re-analysis according to context, and finally derives the relevance of emotion keywords or sentences from a big data association model, thereby enabling emotion-analysis research. Compared with traditional manual emotion analysis and extraction, the method effectively addresses the low efficiency and high error rate of existing manual quantitative analysis.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flow chart of language reconstruction in accordance with the present invention.
Detailed Description
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. All variations that those skilled in the art can make within the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept, are protected.
According to an embodiment of the application, and referring to FIG. 1, the emotion voice recognition method based on big data machine learning of the present scheme includes the following steps (a minimal code sketch of the full pipeline follows the list):
S1, acquiring a quantity of text and audio and converting it into editable, extractable text or binary codes;
S2, performing search-based screening of key information on the converted binary codes and storing the result;
S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints;
and S4, calculating correlation coefficients between items of key information based on a big data association model.
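For concreteness, the four steps can be read as the minimal pipeline sketched below in Python. This is an illustration only: every function body is a readable stand-in (UTF-8 encoding, substring search, keyword joining), not the patent's actual formulas, and the helper names are assumptions.

```python
# Minimal end-to-end sketch of steps S1-S3 (illustrative; each helper is a
# stand-in, not the patent's implementation).

def s1_convert(texts: list[str]) -> list[bytes]:
    # S1: editable/extractable text -> binary codes (UTF-8 as a stand-in).
    return [t.encode("utf-8") for t in texts]

def s2_screen(blobs: list[bytes], lexicon: list[bytes]) -> list[bytes]:
    # S2: search-based screening of key (emotion) information.
    return [k for b in blobs for k in lexicon if k in b]

def s3_reconstruct(keys: list[bytes]) -> str:
    # S3: placeholder "reconstruction": decode and join the screened keywords.
    return " ".join(k.decode("utf-8") for k in keys)

texts = ["I feel calm today", "such a calm and mild evening"]
lexicon = [w.encode("utf-8") for w in ["calm", "mild", "excited"]]
keys = s2_screen(s1_convert(texts), lexicon)
print(s3_reconstruct(keys))  # -> "calm calm mild"
# S4 (correlation across scenes A and B) is sketched separately in the
# step S4 section below.
```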
According to an embodiment of the present application, the above steps are described in detail below.
step S1, acquiring a plurality of characters and audios, and converting the characters and audios into editable and extractable characters or binary codes, wherein the editable and extractable characters or binary codes specifically comprise character recognition extraction and voice conversion recognition;
the character recognition and extraction method comprises the following steps:
Figure BDA0002595171360000051
Figure BDA0002595171360000052
wherein the content of the first and second substances,
Figure BDA0002595171360000053
as a character omegaiTaking a value in the matrix M', wherein i is the length of the character; l is a character range, namely a dictionary; k and l are search coefficients; b is a dividing point; s is the extracted word.
The speech recognition and extraction method is:
[sentence-probability formula, reproduced only as an image (BDA0002595171360000054) in the original publication]
where p(S) is the probability of occurrence of sentence S, W1 is the word sequence, i is the word index, n is the word-sequence length, t is the time-pass coefficient, and tt is the total duration of the speech.
Step S2, search-based screening and storage of key information according to the converted binary codes, specifically includes:
Extraction and conversion of key information
Emotion is not only the result of presenting one's own emotional expression, but also a way of presenting cultural and social emotional standards. According to the target and requirements of the emotion research, character codes related to emotion are configured; keyword information such as calmness, excitement, dejection, depression, pensiveness, mildness, sensory pleasure, and self-efficacy is extracted and converted into a binary format.
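As a small illustration of the "convert to binary format" step, the sketch below encodes an emotion lexicon as bit strings. UTF-8 byte encoding is an assumption; the patent does not specify the binary format.

```python
# Illustrative conversion of emotion keywords to a binary format.
# UTF-8 bytes are an assumed stand-in for the unspecified encoding.
EMOTION_KEYWORDS = ["calmness", "excitement", "dejection", "depression",
                    "pensiveness", "mildness", "pleasure", "self-efficacy"]

def to_binary(word: str) -> str:
    """Encode a keyword as space-separated 8-bit groups."""
    return " ".join(f"{byte:08b}" for byte in word.encode("utf-8"))

binary_lexicon = {w: to_binary(w) for w in EMOTION_KEYWORDS}
print(binary_lexicon["calmness"][:17])  # first two bytes: "01100011 01100001"
```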
Search-based key information screening and storage method
Screening is performed according to the converted binary codes by a screening formula:
[screening formula, reproduced only as an image (BDA0002595171360000061) in the original publication]
where x is the key information, y is the temporary information being searched, and l_max and l_min are the maximum and minimum information lengths, respectively; the screened key information is then stored in binary format.
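The screening formula is only an image here, but claim 3 shows the updates l_min = y + 1 and l_max = y − 1, the classic shape of a binary search. The sketch below implements screening under that reading; treating y as a search position within sorted, binary-encoded information is an interpretation, not something the patent states.

```python
def screen_key(store: list[bytes], x: bytes) -> bool:
    """Binary-search-style screening matching the l_min = y + 1 /
    l_max = y - 1 updates visible in claim 3 (the comparisons in the
    patent are reproduced only as images, so this is an interpretation)."""
    l_min, l_max = 0, len(store) - 1
    while l_min <= l_max:
        y = (l_min + l_max) // 2      # temporary information being searched
        if store[y] < x:
            l_min = y + 1             # first branch of the claim-3 update
        elif store[y] > x:
            l_max = y - 1             # second branch of the claim-3 update
        else:
            return True               # key information found: screen it out
    return False

store = sorted(w.encode("utf-8") for w in ["calm", "depressed", "excited"])
print(screen_key(store, b"calm"))     # -> True
print(screen_key(store, b"neutral"))  # -> False
```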
Step S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints, specifically includes (referring to FIG. 2):
S3.1, reading and inputting keywords;
S3.2, performing language reconstruction under context constraints (a code sketch follows step S3.3), with the reconstruction matrix:
[reconstruction matrix, reproduced only as an image (BDA0002595171360000062) in the original publication]
where W1 is the word sequence, P(W1) is the probability of occurrence of the word sequence W1, i is the word index, n is the word-sequence length, and Ω is the reconstructed output sentence;
and S3.3, outputting the reconstructed sentence.
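As a hedged illustration of "reconstruction under context constraints", the sketch below ranks candidate contexts for a keyword by a bigram product P(W1) ≈ ∏ P(w_i | w_{i−1}) estimated from the corpus. This is a standard language-modeling technique consistent with the symbols above, not necessarily the patent's exact reconstruction matrix.

```python
from collections import Counter

def train_bigram(corpus: list[list[str]]):
    """Count unigrams and bigrams over a tokenized corpus."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        uni.update(sent)
        bi.update(zip(sent, sent[1:]))
    return uni, bi

def sentence_prob(sent: list[str], uni: Counter, bi: Counter) -> float:
    """P(W1) approximated as a product of smoothed bigram probabilities."""
    p = 1.0
    for a, b in zip(sent, sent[1:]):
        p *= (bi[(a, b)] + 1) / (uni[a] + len(uni))  # add-one smoothing
    return p

def reconstruct(keyword: str, candidates: list[list[str]], uni, bi):
    """S3.2 stand-in: pick the candidate containing the keyword whose
    word sequence W1 has the highest probability P(W1)."""
    pool = [c for c in candidates if keyword in c]
    return max(pool, key=lambda s: sentence_prob(s, uni, bi)) if pool else None

corpus = [["i", "feel", "calm", "today"], ["calm", "seas", "ahead"]]
uni, bi = train_bigram(corpus)
print(reconstruct("calm", corpus, uni, bi))  # highest-probability context
```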
Step S4, calculating correlation coefficients between items of key information based on the big data association model, specifically includes:
The intrinsic relevance of the emotion keywords is further analyzed with a big data association model, constructed as:
[association-model formula, reproduced only as an image (BDA0002595171360000071) in the original publication]
where f(W1)_11 is the number of times word W1 appears in both scene A and scene B; f(W1)_00 is the number of times W1 appears in neither A nor B; f(W1)_01 is the number of times W1 appears in B but not in A; f(W1)_10 is the number of times W1 appears in A but not in B; f(W1)_1+ is the number of occurrences of W1 in A; f(W1)_+1 is the number of occurrences of W1 in B; f(W1)_0+ is the number of times W1 does not appear in A; f(W1)_+0 is the number of times W1 does not appear in B; t is the time-pass coefficient and tt is the total duration of the speech.
φ is the correlation coefficient, ranging from −1 to +1: zero if the variables are independent, greater than zero if they are positively correlated, and less than zero otherwise.
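The counts defined above are exactly the cells and margins of a 2x2 contingency table, and the stated properties of φ (range −1 to +1, zero under independence) match the classical phi (mean-square contingency) coefficient. A sketch under that assumption follows; how the t/tt factor enters the patent's image cannot be recovered, so it is omitted here.

```python
import math

def phi_coefficient(f11: int, f00: int, f01: int, f10: int) -> float:
    """Classical phi coefficient for word W1 across scenes A and B.
    f11: W1 in both A and B; f00: in neither; f01: only in B; f10: only in A.
    (Assumed form; the patent's image may add a t/tt time weighting.)"""
    f1p, f0p = f11 + f10, f01 + f00   # W1 present / absent in scene A
    fp1, fp0 = f11 + f01, f10 + f00   # W1 present / absent in scene B
    denom = math.sqrt(f1p * f0p * fp1 * fp0)
    return (f11 * f00 - f01 * f10) / denom if denom else 0.0

# Example: the word co-occurs in both scenes more often than chance predicts.
print(round(phi_coefficient(f11=30, f00=40, f01=10, f10=20), 2))  # -> 0.41
```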
According to one embodiment of the present application, a comparison with the traditional approach follows.
The traditional approach to emotion analysis relies on manual extraction of keywords or sentences; it is time-consuming and prone to extraction gaps that lose local keywords. Experimentally, extracting from a large batch of text by the traditional method usually takes at least one week or more. Applying the algorithm of the invention removes these problems: no keywords or sentences are lost, and the extraction time is short, as shown in the table below.
TABLE 1. Comparison of the conventional character extraction and recognition method with the method of the invention
Method                               Time             Automatic reconstruction
Conventional extraction method       At least 5 days  No
Extraction method of the invention   1 h to 3 h       Yes
As can be seen from the table, the algorithm greatly shortens the working time required, frees up manpower, increases work efficiency, and supports reconstruction from the extracted key information.
In summary, the big-data machine-learning method performs character extraction and language conversion on massive sample material, screens out and stores key information by category, further achieves emotion-related recombination and re-analysis according to context, and finally derives the relevance of emotion keywords or sentences from a big data association model, thereby enabling emotion-analysis research. Compared with traditional manual emotion analysis and extraction, it effectively addresses the low efficiency and high error rate of existing manual quantitative analysis.
The method can identify and extract subjective information from original text material and can help enterprises understand the public sentiment around their brands, products, or services while monitoring online conversations. The recognition algorithm of the method surfaces high-value insights from enterprises' online conversations.
In addition, the invention can be applied in psychology, for example to the interrogation of criminal suspects, the psychological assessment of patients, and the analysis of individuals' emotions.
While the embodiments of the invention have been described in detail with reference to the accompanying drawings, they do not limit the scope of protection of the invention. Various modifications and changes that those skilled in the art can make without inventive effort remain within the scope of the appended claims.

Claims (4)

1. An emotion voice recognition method based on big data machine learning, characterized by comprising the following steps:
S1, acquiring a quantity of text and audio and converting it into editable, extractable binary codes;
S2, performing search-based screening of key information on the converted binary codes and storing the result;
S3, reading the key information obtained by the screening and performing language reconstruction and output under context constraints, including:
S3.1, inputting keywords;
S3.2, performing language reconstruction under context constraints, with the reconstruction matrix:
[reconstruction matrix, reproduced only as an image (FDA0003522829440000011) in the original publication]
where W1 is the word sequence, P(W1) is the probability of occurrence of the word sequence W1, i is the word index, n is the word-sequence length, and Ω is the reconstructed output sentence;
S3.3, outputting and storing the output sentence;
and S4, calculating correlation coefficients between items of key information based on a big data association model.
2. The emotion voice recognition method based on big data machine learning of claim 1, wherein S1 acquires a quantity of text and audio and converts it into editable, extractable binary codes, including character recognition and extraction:
[character-extraction formulas, reproduced only as images (FDA0003522829440000012, FDA0003522829440000013) in the original publication]
where the quantity shown in image FDA0003522829440000014 is the value taken by character w_i in the matrix M', i is the character length, L is the character range (i.e., the dictionary), k and l are search coefficients, B is a dividing point, and S is the extracted text;
and speech recognition and extraction:
[sentence-probability formula, reproduced only as an image (FDA0003522829440000021) in the original publication]
where p(S) is the probability of occurrence of sentence S, W1 is the word sequence, i is the word index, n is the word-sequence length, t is the time-pass coefficient, and tt is the total duration of the speech.
3. The emotion voice recognition method based on big data machine learning of claim 1, wherein the search-based screening and storage of key information according to the converted binary codes in S2 includes:
screening according to the converted binary codes:
if the first condition (reproduced only as image FDA0003522829440000022 in the original) holds, then l_min = y + 1;
if the second condition (reproduced only as image FDA0003522829440000023 in the original) holds, then l_max = y − 1;
otherwise (reproduced only as image FDA0003522829440000024 in the original) the key information is screened out;
where x is the key information, y is the temporary information being searched, and l_max and l_min are the maximum and minimum information lengths, respectively; the screened key information is then stored in binary format.
4. The emotion voice recognition method based on big data machine learning of claim 1, wherein in S4 the correlation coefficients between items of key information are calculated based on the big data association model as follows:
[association-model formula, reproduced only as an image (FDA0003522829440000025) in the original publication]
where f(W1)_11 is the number of times word W1 appears in both scene A and scene B; f(W1)_00 is the number of times W1 appears in neither A nor B; f(W1)_01 is the number of times W1 appears in B but not in A; f(W1)_10 is the number of times W1 appears in A but not in B; t is the time-pass coefficient and tt is the total duration of the speech; f(W1)_1+ is the number of occurrences of W1 in A; f(W1)_+1 is the number of occurrences of W1 in B; f(W1)_0+ is the number of times W1 does not appear in A; f(W1)_+0 is the number of times W1 does not appear in B; and φ is the correlation coefficient, ranging from −1 to +1: zero if the variables are independent, greater than zero if they are positively correlated, and less than zero otherwise.
CN202010706982.1A, filed 2020-07-21 (priority date 2020-07-21): Emotion voice recognition method based on big data machine learning. Status: Active (granted as CN111883179B).

Priority Applications (1)

CN202010706982.1A, priority and filing date 2020-07-21: Emotion voice recognition method based on big data machine learning

Publications (2)

CN111883179A (en), published 2020-11-03
CN111883179B (en), granted 2022-04-15

Family

ID=73155057

Family Applications (1)

CN202010706982.1A (Active): Emotion voice recognition method based on big data machine learning, filed 2020-07-21

Country Status (1)

CN: CN111883179B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party

US9747573B2 *, priority 2015-03-23, published 2017-08-29, Avatar Merger Sub II, LLC: Emotion recognition for workforce analytics

Patent Citations (6)

* Cited by examiner, † Cited by third party

WO2008092473A1 *, priority 2007-01-31, published 2008-08-07, Telecom Italia S.p.A.: Customizable method and system for emotional recognition
CN106598948A *, priority 2016-12-19, published 2017-04-26, 杭州语忆科技有限公司: Emotion recognition method based on a long short-term memory neural network combined with an autoencoder
WO2019119279A1 *, priority 2017-12-19, published 2019-06-27, Wonder Group Technologies Ltd.: Method and apparatus for emotion recognition from speech
WO2019180452A1 *, priority 2018-03-21, published 2019-09-26, Limbic Limited: Emotion data training method and system
CN108763219A *, priority 2018-06-06, published 2018-11-06, 安徽继远软件有限公司: Speech emotion analysis method based on CNN-RSC combinatorial optimization algorithms
CN109460737A *, priority 2018-11-13, published 2019-03-12, Sichuan University: A multi-modal speech emotion recognition method based on an enhanced residual neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party

Guosheng Xu et al. Detecting Sensitive Information of Unstructured Text Using Convolutional Neural Network. 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, 2020-01-02. *
Jun Deng et al. Sparse Autoencoder-based Feature Transfer Learning for Speech Emotion Recognition. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013-12-12. *
宋春晓. 情感语音的非线性特征提取及特征优化的研究 [Research on nonlinear feature extraction and feature optimization of emotional speech]. 中国优秀博硕士学位论文全文数据库(硕士)信息科技辑 [China Masters' Theses Full-text Database, Information Science and Technology], 2018-10-15. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant