CN116955539B - Content compliance judging method based on thinking chain reasoning implicit generation - Google Patents

Content compliance judging method based on thinking chain reasoning implicit generation

Info

Publication number
CN116955539B
CN116955539B CN202311192177.1A CN202311192177A CN116955539B CN 116955539 B CN116955539 B CN 116955539B CN 202311192177 A CN202311192177 A CN 202311192177A CN 116955539 B CN116955539 B CN 116955539B
Authority
CN
China
Prior art keywords
text
language model
subject
scale language
inquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311192177.1A
Other languages
Chinese (zh)
Other versions
CN116955539A (en)
Inventor
顾钊铨
梁栩健
肖洪涛
谭昊
张欢
李鉴明
廖清
高翠芸
徐国爱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202311192177.1A priority Critical patent/CN116955539B/en
Publication of CN116955539A publication Critical patent/CN116955539A/en
Application granted granted Critical
Publication of CN116955539B publication Critical patent/CN116955539B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a content compliance judging method based on implicit chain-of-thought reasoning, which comprises the following steps. Step one: input a text X of unknown safety into a large-scale language model M. Step two: query the large-scale language model M for the subject and object components of the text X to obtain a subject text S and an object text T. Step three: query the large-scale language model M for the potential view to obtain a potential view text O. Step four: according to the potential view text O obtained in step three, query the large-scale language model M whether the intent expressed by the text X complies with the specification, and output "safe" if it does, otherwise output "unsafe". The beneficial effects of the invention are as follows: by making good use of the common-sense inference capability of the large-scale language model and expert knowledge in the specific field, the invention reasonably prompts the large-scale language model to perform chained reasoning, gradually reveals the deep hidden semantics of the text, and greatly improves the performance of the text security detection system.

Description

Content compliance judging method based on thinking chain reasoning implicit generation
Technical Field
The invention relates to the field of data processing, and in particular to a content compliance judging method based on implicit chain-of-thought reasoning.
Background
Text security detection is an important research direction in the field of natural language processing, aimed at detecting whether text complies with regulations and ethics. Depending on whether an explicit filter word is present, text security detection can be divided into explicit text security detection and implicit text security detection. Explicit text security detection is currently the mainstream because it is simple and low-cost. Implicit text security detection, however, is more challenging because there is no explicit filter word to use as a criterion. For example, given the sentence "I think your brain is a very nice decoration", no explicit filter word appears in the sentence.
From a human perspective, it is not difficult to see that the sentence hides a personal attack. Continuing with the example above, the "brain" in the subject part refers to an important organ that controls the various activities of the human body, while the "decoration" in the object part refers to an object that is pleasing to look at, intended only for display, and generally of no practical use. We can obtain a chain of thought such as: "your brain is important" → "a decoration is unimportant" → "your brain is of no use", from which it can be concluded that the text contains a personal attack. From the idea (chain of thought) given above, it can be appreciated why conventional text security detection often exhibits such a high misjudgment rate: the shallowness of those models makes it difficult for them to infer the underlying semantics. Recent studies have found that the chain of thought (CoT) is one explanation for why the capabilities of large-scale language models far surpass those of traditional language models.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a content compliance judging method based on implicit chain-of-thought reasoning.
The invention provides a content compliance judging method based on implicit chain-of-thought reasoning, which comprises the following steps:
step one: inputting a text X with unknown safety into a large-scale language model M;
step two: querying the large-scale language model M for the subject and object components of the text X of unknown safety to obtain a subject text S and an object text T;
step three: querying the large-scale language model M for the potential view according to the subject text S and the object text T obtained in step two, to obtain the potential view text O output by the large-scale language model M;
step four: according to the potential view text O obtained in step three, querying the large-scale language model M whether the intent expressed by the text X of unknown safety complies with the specification, and outputting "safe" if it does, otherwise outputting "unsafe".
As a further improvement of the present invention, the second step includes:
a subject query step: querying the large-scale language model M for the subject component or main-role component in the text X of unknown safety to obtain the subject text S output by the large-scale language model M;
an object query step: querying the large-scale language model M for the object component, purpose, or target in the text X of unknown safety to obtain the object text T output by the large-scale language model M.
As a further improvement of the present invention, in step two, the subject query step is performed first, and then the object query step is performed.
As a further improvement of the present invention, in the subject query step, the subject query template in the first-step prompt template is used to query the subject, main role, or topic in the text X, obtaining the subject text S output by the large-scale language model M; this step is expressed as S = M(C1_subject(X)). In the object query step, the object query template in the first-step prompt template is used to query the object or purpose in the text X, obtaining the object text T output by the large-scale language model M; this step is expressed as T = M(C1_object(X)).
As a further improvement of the present invention, the following step is further included between step two and step three:
a dictionary filtering step: inputting the subject text S and the object text T obtained in step two into a filtered-word dictionary, which judges whether the texts and the other entities they may refer to comply with the specification; if not, the result "unsafe" is output directly and the process ends; if they comply, the possibly associated subject array and object array are output to the large-scale language model M, and then step three is performed.
As a further improvement of the present invention, in the dictionary filtering step, the filtered-word dictionary comprises a filtered-word relational database, a knowledge graph, and an entity-relation determiner; for the subject text S and the object text T, the filtered-word dictionary outputs the possibly associated subject array and object array to the large-scale language model M: S' = [s1, s2, ..., sm] and T' = [t1, t2, ..., tn], where the length of each associated array is less than 5.
As a further improvement of the present invention, in step three, the second-step prompt template is used for the query: "Given the text X, the subject of the text may refer to S', the object may refer to T'; what is the potential view of the text?", obtaining the potential view text O output by the large-scale language model M; this step is expressed as O = M(C2(X, S', T')).
As a further improvement of the present invention, the following step is performed between step three and step four:
a confidence judgment step: inputting the potential view text O obtained in step three into an integrated security classifier, which judges it against a set confidence threshold; if the confidence is higher than the set threshold, the potential view text O is judged to be non-compliant text, "unsafe" is output, and the process ends; if the confidence is lower than the set threshold, step four is performed.
As a further improvement of the present invention, in step four, the third-step prompt template is used for the query: "Given the text X, the subject of the text may refer to S', the object may refer to T', and the potential view is O; does the intent expressed by the text comply with the specification? Please output yes or no."; this step is expressed as Y = M(C3(X, S', T', O)).
As a further improvement of the present invention, the following step is performed after step four:
step five: collecting the text X of unknown safety, the subject text S, the object text T, and the potential view text O, returning them to a manual annotation platform, and performing reinforcement learning from human feedback (RLHF) optimization on the large-scale language model M.
As a further improvement of the present invention, the method further comprises the following step before performing step one:
a preparation step: the legal and ethical knowledge involved in text security is first introduced into the large-scale language model M through reinforcement learning from human feedback.
The beneficial effects of the invention are as follows: by making use of the common-sense inference capability of the large-scale language model and expert knowledge in the specific field (comprising a filtered-word dictionary and an integrated security classifier), the invention reasonably prompts the large-scale language model to perform chained reasoning, gradually reveals the deep hidden semantics of the text, and greatly improves the performance of the text security detection system.
Drawings
FIG. 1 is a flow chart of the content compliance judging method based on implicit chain-of-thought reasoning according to the present invention.
Detailed Description
In text security detection, the invention further improves the system's implicit text security detection performance by combining the chain-of-thought capability of a large model, namely its reasoning capability and in-context learning capability, with accurate information provided by domain experts and reliable databases in specific fields.
The invention provides a content compliance judging method based on implicit chain-of-thought reasoning, which comprises three reasoning stages that respectively deduce: 1) the grammatical components of the text, including the subject part and the object part (the object part is mandatory); 2) the candidate arrays of the subject part and the object part, according to expert knowledge in the specific field such as a relational database, a dictionary, and a knowledge graph; 3) the final text security. Through this gradual reasoning from the whole to the details and from the simple to the complex, the deep view of the text is progressively revealed, so that more accurate text security detection is obtained.
The inputs are a text X of unknown safety, a large-scale language model M, a filtered-word dictionary (including a filtered-word relational database, a knowledge graph, an entity-relation determiner, etc.; optional), and a simple text security classification model C (optional). The output of the invention is the judgment of whether the text complies with the specification. With reference to FIG. 1, the content compliance judging method based on implicit chain-of-thought reasoning provides the following three-step prompt templates (Prompts):
step one: inputting a text X with unknown safety into a large-scale language model M;
step two: querying the large-scale language model M for the subject and object components of the text X of unknown safety to obtain a subject text S and an object text T;
the second step comprises:
a subject query step: querying the large-scale language model M for the subject component or main-role component in the text X of unknown safety to obtain the subject text S output by the large-scale language model M;
an object query step: querying the large-scale language model M for the object component, purpose, or target in the text X of unknown safety to obtain the object text T output by the large-scale language model M.
In the subject query step, the subject query template in the first-step prompt template C1 is used: "Given the text X, what is the subject, main role, or topic of the text?", obtaining the subject text S output by the large-scale language model M; this step is expressed as S = M(C1_subject(X)).
In the object query step, the object query template in the first-step prompt template C1 is used: "Given the text X, what is the object or purpose of the text?", obtaining the object text T output by the large-scale language model M; this step is expressed as T = M(C1_object(X)).
Preferably, the subject query step is performed first, and then the object query step is performed.
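As an illustration only, the two first-step queries can be sketched in Python as follows; the helper name query_llm (any callable mapping a prompt string to the text reply of the large-scale language model M) and the exact prompt wording are assumptions of the sketch, not part of the claimed method:

```python
from typing import Callable

# Assumed interface: any callable mapping a prompt string to the text reply of
# the large-scale language model M (an API wrapper, a local model, etc.).
QueryFn = Callable[[str], str]

def query_subject(query_llm: QueryFn, text_x: str) -> str:
    """Subject query of prompt template C1: S = M(C1_subject(X))."""
    prompt = (
        f'Given the text: "{text_x}"\n'
        "What is the subject, main role, or topic of the text?"
    )
    return query_llm(prompt).strip()

def query_object(query_llm: QueryFn, text_x: str) -> str:
    """Object query of prompt template C1: T = M(C1_object(X))."""
    prompt = (
        f'Given the text: "{text_x}"\n'
        "What is the object or purpose of the text?"
    )
    return query_llm(prompt).strip()
```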
Step three: querying the large-scale language model M for the potential view based on the obtained subject text S and object text T; the aim is to make the large-scale language model M analyze the relation between the specified subject and object prompts;
in step three, the second step is used to prompt template C2 interrogation: "given text X, the subject of the text may refer toThe object may mean->What is the underlying view of the text? "obtaining a potential perspective text O output by the large-scale language model M, the step being expressed as: />
Step four: according to the potential view text O obtained in step three, querying the large-scale language model M whether the intent expressed by the text X of unknown safety complies with the specification, and outputting "safe" if it does, otherwise outputting "unsafe".
In step four, the potential view text O obtained in step three is used to query the large-scale language model M with the third-step prompt template C3: "Given the text X, the subject of the text may refer to S', the object may refer to T', and the potential view is O; does the intent expressed by the text comply with the specification? Please output yes or no."; this step is expressed as Y = M(C3(X, S', T', O)).
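The step-four compliance query can likewise be sketched as follows; checking the reply for a leading "yes" is an assumption of the sketch, since the template explicitly asks the model to output yes or no:

```python
from typing import Callable, Sequence

QueryFn = Callable[[str], str]

def query_compliance(query_llm: QueryFn, text_x: str,
                     subjects: Sequence[str], objects: Sequence[str],
                     view_o: str) -> bool:
    """Prompt template C3: Y = M(C3(X, S', T', O)); True means compliant ("safe")."""
    prompt = (
        f'Given the text: "{text_x}"\n'
        f"The subject of the text may refer to: {', '.join(subjects)}.\n"
        f"The object may refer to: {', '.join(objects)}.\n"
        f"The potential view of the text is: {view_o}\n"
        "Does the intent expressed by the text comply with the specification? "
        "Please output yes or no."
    )
    return query_llm(prompt).strip().lower().startswith("yes")
```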
as a further optimization, the steps between the second step and the third step further comprise the following steps:
a dictionary filtering step: inputting the subject text S and the object text T obtained in step two into a filtered-word dictionary, which judges whether the texts (usually entries) and the other entities they may refer to comply with the specification; if not, the result "unsafe" is output directly and the process ends; if they comply, the possibly associated subject/object arrays are output to the large-scale language model M, and then step three is performed. A filtered word may refer to other entities; for example: donkey-hide gelatin.
In the dictionary filtering step, the filtered-word dictionary comprises a filtered-word relational database, a knowledge graph, and an entity-relation determiner; for the subject text S and the object text T, it outputs the possibly associated subject array and object array to the large-scale language model M: S' = [s1, s2, ..., sm] and T' = [t1, t2, ..., tn], where the length of each associated array is less than 5.
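A sketch of the dictionary filtering step, with the filtered-word relational database and knowledge graph reduced to two illustrative in-memory structures (REFERENCE_MAP, BLOCKLIST); the real components described above would be substituted here:

```python
from typing import Dict, List, Optional

# Illustrative stand-ins for the filtered-word relational database / knowledge
# graph: a map from a surface form to the entities it may refer to, and a set
# of entities that are directly non-compliant.
REFERENCE_MAP: Dict[str, List[str]] = {
    "search dog": ["search engine", "dog", "dog vendor"],  # mapping used in the worked example below
}
BLOCKLIST: set = set()

MAX_ARRAY_LEN = 4  # keeps the associated array length below 5

def dictionary_filter(term: str) -> Optional[List[str]]:
    """Return the candidate-entity array for `term`, or None when a blocked
    entity is hit, in which case the text is judged "unsafe" immediately."""
    candidates = REFERENCE_MAP.get(term, [term])[:MAX_ARRAY_LEN]
    if any(entity in BLOCKLIST for entity in candidates):
        return None
    return candidates
```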
As a further optimization, the following step is also performed between step three and step four:
a confidence judgment step: inputting the potential view text O obtained in step three into an integrated security classifier, which judges it against a set confidence threshold; if the confidence is higher than the set threshold, the potential view text O is judged to be non-compliant text, "unsafe" is output, and the process ends; if the confidence is lower than the set threshold, step four is performed.
The integrated security classifier here is the small-scale text security classifier C.
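The confidence judgment step, sketched with an assumed classifier interface that returns a (is_non_compliant, confidence) pair; the 0.75 default mirrors the threshold used in the worked example below:

```python
from typing import Callable, Optional, Tuple

# Assumed interface for the integrated security classifier C:
# maps a text to (is_non_compliant, confidence in [0, 1]).
ClassifierFn = Callable[[str], Tuple[bool, float]]

def confidence_judgement(classifier: ClassifierFn, view_o: str,
                         threshold: float = 0.75) -> Optional[str]:
    """Return "unsafe" when the classifier flags the potential view text O above
    the confidence threshold; return None to fall through to step four."""
    is_non_compliant, confidence = classifier(view_o)
    if is_non_compliant and confidence > threshold:
        return "unsafe"
    return None
```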
As a further optimization, the method further comprises the following step before performing step one:
a preparation step: the legal and ethical knowledge involved in text security is first introduced into the large-scale language model M through reinforcement learning from human feedback. This step is optional, depending on the capabilities of the large-scale language model M.
As a further optimization, the following step is performed after step four:
step five: collecting the text X of unknown safety, the subject text S, the object text T, and the potential view text O, returning them to a manual annotation platform, and performing reinforcement learning from human feedback (RLHF) optimization on the large-scale language model M.
The following takes the text "Baidu can't find you, but Sogou (search dog) surely can" as an example, with ChatGPT as the target large language model and the minimum confidence threshold of the small-scale model set to 75%:
The first step: query the large-scale language model M: "Given the text 'Baidu can't find you, but Sogou surely can', what is the subject, main role, or topic of the text?" The large-scale language model M replies: "The subject or main role or topic of the text is 'Sogou (search dog)'";
The second step: query the large-scale language model M: "Given the text 'Baidu can't find you, but Sogou surely can', what is the object or purpose of the text?" The large-scale language model M replies: "The object or purpose of the text is 'you'";
The third step: query the dictionary and the knowledge graph: the subject "Sogou (search dog)" may refer to S' = [search engine, dog, dog vendor], and the object T' = [you]. Querying the dictionary, the relational knowledge graph, the sensitive-word determiner, etc. shows that no sensitive word is involved, so the next step is carried out;
The fourth step: query the large-scale language model M: "Given the text 'Baidu can't find you, but Sogou surely can', the subject of the text may refer to a search engine, a dog, or a dog vendor, and the object may refer to you; what is the potential view of the text?" The large-scale language model M replies: "If the subject is a dog and the object is you, then the potential view of the text may be to use metaphor and innuendo to express someone's belittlement or insult toward the listener."
The fifth step: input the potential view "If the subject is a search engine, a dog, or a dog vendor and the object is you, then the potential view of the text may be to use metaphor and innuendo to express someone's belittlement or insult toward the listener." into the sensitive-word filter or the small-scale text security classifier C; the small-scale text security classifier C judges it to be non-compliant text with 90% confidence, which exceeds the set confidence threshold of 75%, so the judgment is reliable; the text "Baidu can't find you, but Sogou surely can" is therefore finally judged to be non-compliant text, and the process ends;
If the judgment confidence of the small-scale text security classifier C in the fifth step is lower than the threshold, step four of the method (the C3 query) is executed instead: query the large-scale language model M: "Given the text 'Baidu can't find you, but Sogou surely can', the subject of the text may refer to a search engine, a dog, or a dog vendor, the object may refer to you, and the potential view is that if the subject is a search engine, a dog, or a dog vendor and the object is you, the text may use metaphor and innuendo to express someone's belittlement or insult toward the listener. Does the intent expressed by the text comply with the specification? Please output yes or no." The large-scale language model M outputs: "No"; the text is finally judged to be non-compliant, and the process ends.
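Pulling the pieces together, the end-to-end flow of this example can be sketched as below; the helper functions are the illustrative ones from the earlier sketches and are assumed to be defined in the same module:

```python
def judge_compliance(query_llm, classifier, text_x: str,
                     threshold: float = 0.75) -> str:
    """Full decision flow: C1 queries, dictionary filtering, C2 query,
    classifier check, then the C3 compliance query."""
    # Step two: subject and object queries (prompt template C1).
    subject_s = query_subject(query_llm, text_x)
    object_t = query_object(query_llm, text_x)

    # Dictionary filtering: expand to candidate-entity arrays, or stop early.
    subjects = dictionary_filter(subject_s)
    objects = dictionary_filter(object_t)
    if subjects is None or objects is None:
        return "unsafe"

    # Step three: potential-view query (prompt template C2).
    view_o = query_potential_view(query_llm, text_x, subjects, objects)

    # Confidence judgment with the integrated security classifier.
    if confidence_judgement(classifier, view_o, threshold) == "unsafe":
        return "unsafe"

    # Step four: final compliance query (prompt template C3).
    return "safe" if query_compliance(query_llm, text_x, subjects, objects, view_o) else "unsafe"
```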
The beneficial effects of the invention are as follows: through the chain of thought, the invention makes good use of the common-sense inference capability of the large-scale language model M and expert knowledge in the specific field (comprising the filtered-word dictionary and the integrated security classifier), reasonably prompts the large-scale language model M to perform chained reasoning, gradually reveals the deep hidden semantics of the text, and greatly improves the performance of the text security detection system.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (4)

1. A content compliance judging method based on implicit chain-of-thought reasoning, comprising the following steps:
step one: inputting a text X with unknown safety into a large-scale language model M;
step two: querying the large-scale language model M for the subject and object components of the text X of unknown safety to obtain a subject text S and an object text T;
step three: querying the large-scale language model M for the potential view according to the subject text S and the object text T obtained in step two, to obtain the potential view text O output by the large-scale language model M;
step four: according to the potential view text O obtained in step three, querying the large-scale language model M whether the intent expressed by the text X of unknown safety complies with the specification, and outputting "safe" if it does, otherwise outputting "unsafe"; wherein step two comprises the following steps:
a subject query step: querying the large-scale language model M for the subject component or main-role component in the text X of unknown safety to obtain the subject text S output by the large-scale language model M;
an object query step: querying the large-scale language model M for the object component, purpose, or target in the text X of unknown safety to obtain the object text T output by the large-scale language model M;
the method further comprises the following step between step two and step three:
a dictionary filtering step: inputting the subject text S and the object text T obtained in step two into a filtered-word dictionary, which judges whether the texts and the other entities they may refer to comply with the specification; if not, the result "unsafe" is output directly; if they comply, the possibly associated subject array and object array are output to the large-scale language model M, and then step three is performed;
in the dictionary filtering step, the filtered-word dictionary comprises a filtered-word relational database, a knowledge graph, and an entity-relation determiner; for the subject text S and the object text T, the filtered-word dictionary outputs the possibly associated subject array and object array to the large-scale language model M: S' = [s1, s2, ..., sm] and T' = [t1, t2, ..., tn], where the length of each associated array is less than 5;
the method further comprises the following step between step three and step four:
a confidence judgment step: inputting the potential view text O obtained in step three into an integrated security classifier, which judges it against a set confidence threshold; if the confidence is higher than the set threshold, the potential view text O is judged to be non-compliant text, "unsafe" is output, and the process ends; if the confidence is lower than the set threshold, step four is performed;
in the subject query step, subjects or angles or subjects in the text X are queried using a subject query template to obtain a subject text S output by the large-scale language model M, which is expressed as:the method comprises the steps of carrying out a first treatment on the surface of the In the object inquiring step, an object or object in the text X is inquired by using an object inquiring template to obtain an object text T output by the large-scale language model M, which is expressed as: />
in step three, the query is: "Given the text X, the subject of the text may refer to S', the object may refer to T'; what is the potential view of the text?", obtaining the potential view text O output by the large-scale language model M, which is expressed as O = M(C2(X, S', T'));
in step four, the query is: "Given the text X, the subject of the text may refer to S', the object may refer to T', and the potential view is O; does the intent expressed by the text comply with the specification? Please output yes or no"; this step is expressed as Y = M(C3(X, S', T', O)).
2. The content compliance judging method based on implicit chain-of-thought reasoning according to claim 1, wherein in step two, the subject query step is performed first, and then the object query step is performed.
3. The content compliance judging method based on implicit chain-of-thought reasoning according to claim 1, further comprising, after performing step four:
step five: collecting the text X of unknown safety, the subject text S, the object text T, and the potential view text O, returning them to a manual annotation platform, and performing reinforcement learning from human feedback (RLHF) optimization on the large-scale language model M.
4. The content compliance judging method based on implicit chain-of-thought reasoning according to claim 1, further comprising, before performing step one:
a preparation step: the legal and ethical knowledge involved in text security is first introduced into the large-scale language model M through reinforcement learning from human feedback.
CN202311192177.1A 2023-09-15 2023-09-15 Content compliance judging method based on thinking chain reasoning implicit generation Active CN116955539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311192177.1A CN116955539B (en) 2023-09-15 2023-09-15 Content compliance judging method based on thinking chain reasoning implicit generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311192177.1A CN116955539B (en) 2023-09-15 2023-09-15 Content compliance judging method based on thinking chain reasoning implicit generation

Publications (2)

Publication Number Publication Date
CN116955539A CN116955539A (en) 2023-10-27
CN116955539B true CN116955539B (en) 2023-12-12

Family

ID=88456776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311192177.1A Active CN116955539B (en) 2023-09-15 2023-09-15 Content compliance judging method based on thinking chain reasoning implicit generation

Country Status (1)

Country Link
CN (1) CN116955539B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116739110A (en) * 2023-06-21 2023-09-12 山东慧智博视数字科技有限公司 Large language model distillation method based on thinking chain

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3146673A1 (en) * 2021-01-25 2022-07-25 Royal Bank Of Canada System and method for natural language processing with pretrained language models

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116739110A (en) * 2023-06-21 2023-09-12 山东慧智博视数字科技有限公司 Large language model distillation method based on thinking chain

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models; Jason Wei et al.; 36th Conference on Neural Information Processing Systems (NeurIPS 2022); pp. 1-43 *
Human thinking simulation and its hierarchical-coordination structure model; 江建慧 (Jiang Jianhui); Journal of Shanghai Tiedao University (02); 33-41 *

Also Published As

Publication number Publication date
CN116955539A (en) 2023-10-27

Similar Documents

Publication Publication Date Title
Malink Aristotle's modal syllogistic
CN106202476B (en) A kind of interactive method and device of knowledge based collection of illustrative plates
TWI536364B (en) Automatic speech recognition method and system
Misra et al. Using summarization to discover argument facets in online ideological dialog
CN107038229B (en) Use case extraction method based on natural semantic analysis
CN104933027A (en) Open Chinese entity relation extraction method using dependency analysis
CN103678275A (en) Two-level text similarity calculation method based on subjective and objective semantics
CN103744953A (en) Network hotspot mining method based on Chinese text emotion recognition
Ezhilarasi et al. Automatic emotion recognition and classification
CN106294324A (en) A kind of machine learning sentiment analysis device based on natural language parsing tree
CN110007611A (en) A kind of implicit collision detection method of smart home of knowledge based map
Sun et al. Multi-channel CNN based inner-attention for compound sentence relation classification
El Desouki et al. Exploring the recent trends of paraphrase detection
Walker et al. That’s your evidence?: Classifying stance in online political debate
CN116955539B (en) Content compliance judging method based on thinking chain reasoning implicit generation
Lei et al. Open domain question answering with character-level deep learning models
Samih et al. Enhanced sentiment analysis based on improved word embeddings and XGboost.
Vosoughi et al. A semi-automatic method for efficient detection of stories on social media
CN110827807B (en) Voice recognition method and system
Dong et al. DC-BiGRU-CNN Algorithm for Irony Recognition in Chinese Social Comments
Baracho et al. Sentiment analysis in social networks
CN111680493A (en) English text analysis method and device, readable storage medium and computer equipment
Baracho et al. Sentiment Analysis in Social Networks: a Study on Vehicles.
CN116821312B (en) Complex question-answering method based on discipline field knowledge graph
Sahin Classification of turkish semantic relation pairs using different sources

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant