CN118171235A - Bimodal counterfeit information detection method based on large language model - Google Patents

Bimodal counterfeit information detection method based on large language model

Info

Publication number
CN118171235A
CN118171235A
Authority
CN
China
Prior art keywords
text
language model
image
information detection
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410562720.0A
Other languages
Chinese (zh)
Other versions
CN118171235B (en)
Inventor
王茂林
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kim Dai Intelligence Innovation Technology Co ltd
Original Assignee
Shenzhen Kim Dai Intelligence Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kim Dai Intelligence Innovation Technology Co ltd filed Critical Shenzhen Kim Dai Intelligence Innovation Technology Co ltd
Priority to CN202410562720.0A priority Critical patent/CN118171235B/en
Publication of CN118171235A publication Critical patent/CN118171235A/en
Application granted granted Critical
Publication of CN118171235B publication Critical patent/CN118171235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/0895 Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bimodal counterfeit information detection method based on a large language model, which comprises: S1, encoding text features and image features respectively to obtain text-encoded features and image-encoded features, and aligning the text modality and the image modality by computing a contrastive learning loss; S2, generating a text description and common sense through the large language model, performing interaction learning and reasoning usefulness evaluation through the news-reasoning collaboration module, and generating text-text description interaction features, text-common sense interaction features, image-text description interaction features and image-common sense interaction features; S3, the feature aggregation module assigns weights to the text-encoded features, the image-encoded features and the interaction features, and the fused feature is finally input into the classifier to complete the final classification. The method improves the accuracy of false information detection and alleviates problems such as hallucination, lack of factual information and difficulty in adapting to new environments.

Description

Bimodal counterfeit information detection method based on large language model
Technical Field
The invention relates to a bimodal counterfeit information detection method based on a large language model.
Background
In the context of the widespread development of social media, individuals can freely share news information and make comments on the Internet, which accelerates the generation and spread of false news. Although human counterfeit information detection experts can accurately verify the authenticity of news, they cannot process large amounts of information, and conventional detection models perform poorly on test data whose distribution differs from that of the training data. Therefore, researchers have turned to Large Language Model (LLM) based counterfeit information detection techniques. The recent rise of LLMs such as GPT-3, GPT-4 and InstructGPT has demonstrated that LLMs possess instruction-following capability as well as significant reasoning and generation capabilities. LLMs offer interpretability, generalizability and controllability in counterfeit information detection, and can acquire knowledge from limited training data so as to make accurate predictions in unfamiliar data scenarios. Many studies have utilized the understanding and reasoning capabilities of LLMs to improve the accuracy of counterfeit information detection. In addition, LLMs can fluently interpret a given piece of misinformation in natural language while predicting its authenticity. However, large language models still face challenges such as hallucination, lack of factual information, and difficulty in adapting to new environments. Additional contextual information, such as emotion and stance, as well as external knowledge, aids the detection of false information. Since an LLM cannot by itself properly select and integrate the reasoning it generates, it cannot completely replace Small Language Models (SLMs), but it can act as a mentor to an SLM by providing instructive reasoning.
How to align the text and image modalities, how to use the reasoning capability of an LLM to improve the accuracy of false information detection in few-sample scenarios, and how to learn the interaction information between the reasoning generated by the LLM and the text and image modalities and select the useful reasoning are the main problems addressed by the present invention.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a bimodal counterfeit information detection method based on a large language model.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the method for detecting the bimodal counterfeit information based on the large language model is characterized by comprising the following steps of: comprises the following steps of
S1, respectively encoding text features and image features to obtain text encoded features and image encoded features, and realizing alignment of a text mode and an image mode by calculating contrast learning loss;
s2, generating text description through a large language model And common senseThrough the cooperation module pair of news principleAndPerforming interactive learning and reasoning usefulness assessment to generate text-text description interactive characteristics #) Text-common sense interactive characteristics) Image-text description interaction characteristics) Interaction characteristics with image-common sense);
S3, the characteristic aggregation module isWeights are distributed and finally input into a classifier to finish final classification.
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: in S1, text feature encoding is performed by a Transformer model, and image feature encoding is performed by a ResNet network.
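As a non-limiting illustration of the S1 encoders, the following PyTorch sketch assumes a small Transformer encoder over token embeddings for the text branch and a torchvision ResNet-50 backbone with its classification head removed for the image branch; the vocabulary size, feature dimension, layer counts and projection head are illustrative assumptions rather than values specified by the method.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class BimodalEncoders(nn.Module):
    """Text encoder (Transformer) and image encoder (ResNet), projected to a shared dimension."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        # Text branch: token embedding + Transformer encoder (stand-in for the Transformer model of S1)
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Image branch: ResNet-50 backbone with the classification head removed
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()           # outputs a 2048-d global image feature
        self.image_encoder = backbone
        self.img_proj = nn.Linear(2048, dim)  # project into the shared feature space

    def forward(self, token_ids, images):
        e_t = self.text_encoder(self.embed(token_ids))   # (B, L, dim) token-level text features
        e_v = self.img_proj(self.image_encoder(images))  # (B, dim) global image feature
        return e_t, e_v
```

The token-level text features can later attend to the reasoning features in S2, while the pooled text and image features are used for alignment and fusion.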
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: in S1, the image-text pairs are set as {(v_i, t_i)}, i = 1, ..., N, where i denotes the i-th feature pair and N denotes the number of pairs; the text-to-image similarity p_i^{t2v} and the image-to-text similarity p_i^{v2t} predicted for the i-th feature pair are computed as:
p_i^{t2v} = exp(sim(t_i, v_i)/τ) / Σ_{j=1}^{N} exp(sim(t_i, v_j)/τ)
p_i^{v2t} = exp(sim(v_i, t_i)/τ) / Σ_{j=1}^{N} exp(sim(v_i, t_j)/τ)
where sim(·,·) denotes the similarity function and τ is a temperature parameter; the corresponding one-hot labels are y_i^{t2v} and y_i^{v2t}, matching text and image modalities are labeled 1 and mismatches are labeled 0, and the cross-entropy losses are then computed:
L_t2v = -(1/N) Σ_{i=1}^{N} y_i^{t2v} log p_i^{t2v}
L_v2t = -(1/N) Σ_{i=1}^{N} y_i^{v2t} log p_i^{v2t}
The final contrastive learning loss L_cl is computed as:
L_cl = (L_t2v + L_v2t) / 2
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: the large language model in S2 is an LLM (Large Language Model).
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: s2 comprises
S21, news-reasoning interaction, namely introducing a news-reasoning interactor containing a bidirectional cross-attention mechanism to encourage interactive learning of features, wherein the cross-attention mechanism is:
CA(Q, K, V) = softmax(Q' K'ᵀ / √d) V'
where Q' = W_Q Q, K' = W_K K, V' = W_V V, and d denotes the feature dimension; for the given e_t, e_v and the encoded reasoning features e_td and e_cs, there is
f_{t→td} = AvgPool(CA(e_t, e_td, e_td));
f_{td→t} = AvgPool(CA(e_td, e_t, e_t));
f_{t→cs} = AvgPool(CA(e_t, e_cs, e_cs));
f_{cs→t} = AvgPool(CA(e_cs, e_t, e_t));
f_{v→td} = AvgPool(CA(e_v, e_td, e_td));
f_{td→v} = AvgPool(CA(e_td, e_v, e_v));
f_{v→cs} = AvgPool(CA(e_v, e_cs, e_cs));
f_{cs→v} = AvgPool(CA(e_cs, e_v, e_v));
S22, LLM judgment prediction, namely respectively inputting the interaction features of the text description and of the common sense into a multilayer perceptron, where m̂_td and m̂_cs respectively represent the predicted judgments of the text description and the common sense; the calculations are as follows:
m̂_td = sigmoid(MLP(f_{td→t}));
m̂_cs = sigmoid(MLP(f_{cs→t}));
L_td = -m_td log m̂_td - (1 - m_td) log(1 - m̂_td);
L_cs = -m_cs log m̂_cs - (1 - m_cs) log(1 - m̂_cs);
where m_td and m_cs indicate whether the text description and the common sense give a correct prediction, and L_td and L_cs represent the cross-entropy losses of the text description prediction and the common sense prediction;
S23, reasoning usefulness evaluation, wherein in the text description usefulness evaluation, f_{t→td} and f_{v→td} are concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_td is predicted, and the cross-entropy loss L_wtd of the predicted usefulness weight is computed; the process is as follows:
ŵ_td = sigmoid(MLP([f_{t→td}, f_{v→td}]));
L_wtd = -w_td log ŵ_td - (1 - w_td) log(1 - ŵ_td);
in the common sense usefulness evaluation, f_{t→cs} and f_{v→cs} are concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_cs is predicted, and the cross-entropy loss L_wcs of the predicted usefulness weight is computed; the process is as follows:
ŵ_cs = sigmoid(MLP([f_{t→cs}, f_{v→cs}]));
L_wcs = -w_cs log ŵ_cs - (1 - w_cs) log(1 - ŵ_cs);
then ŵ_td and ŵ_cs are used to re-weight [f_{t→td}, f_{v→td}] and [f_{t→cs}, f_{v→cs}]:
[f_{t,td}, f_{v,td}] = ŵ_td * [f_{t→td}, f_{v→td}];
[f_{t,cs}, f_{v,cs}] = ŵ_cs * [f_{t→cs}, f_{v→cs}].
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: the feature aggregation module in S3 adopts a fuzzy learning method to assign weights to e_t, e_v, f_{t,td}, f_{t,cs}, f_{v,td} and f_{v,cs}.
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: in S3, the feature aggregation module sets a modality attention mechanism to re-adjust the weight channels of e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}]; the features e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}] are first fused, global pooling and max pooling operations are then applied, the obtained initial weights s_avg and s_max are sent to a plurality of connection layers, and the attention weights att = {a_t, a_v, a_td, a_cs} are obtained through normalization based on GELU and Sigmoid; the final fused feature vector F is expressed as:
F = a_t * e_t + a_v * e_v + a_td * [f_{t,td}, f_{v,td}] + a_cs * [f_{t,cs}, f_{v,cs}]
Finally, F is input into the classifier to complete the final classification, and the cross-entropy loss function L_ce of the classification result is computed.
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: the connection layer is a 4×4 fully connected layer.
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: the number of the connecting layers is two.
The bimodal counterfeit information detection method based on a large language model as described above is characterized in that: the normalization process is a GELU and Sigmoid based normalization process.
The beneficial effects of the invention are as follows:
According to the invention, the text modality and the image modality are aligned, which enhances data understanding and improves the recognition accuracy and robustness of the model; meanwhile, through the complementary interaction between the large language model (LLM) and the small language model (SLM), the understanding and reasoning capability of the LLM and the task-specific feature extraction capability of the SLM are combined, the accuracy of false information detection is improved, and problems such as hallucination, lack of factual information and difficulty in adapting to new environments are alleviated.
Drawings
FIG. 1 is a block diagram of the method of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear …) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings), and if the particular posture is changed, the directional indicator is changed accordingly. Furthermore, the description of "preferred," "less preferred," and the like, herein is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "preferred", "less preferred" may include at least one such feature, either explicitly or implicitly.
As shown in FIG. 1, the bimodal counterfeit information detection method based on a large language model comprises a contrastive learning module, a news-reasoning collaboration module and a feature aggregation module, and comprises the following steps:
S1, encoding text features and image features respectively to obtain text-encoded features e_t and image-encoded features e_v; the text modality and the image modality are aligned by computing the contrastive learning loss;
S2, generating a text description r_td and common sense r_cs through a large language model, performing interaction learning and reasoning usefulness evaluation on e_t, e_v, r_td and r_cs through the news-reasoning collaboration module, and generating text-text description interaction features f_{t,td}, text-common sense interaction features f_{t,cs}, image-text description interaction features f_{v,td} and image-common sense interaction features f_{v,cs};
S3, the feature aggregation module assigns weights to e_t, e_v, f_{t,td}, f_{t,cs}, f_{v,td} and f_{v,cs}, and the fused feature is finally input into a classifier to complete the final classification.
The contrastive learning module is used to design the contrastive learning loss function and compute the contrastive learning loss L_cl, realizing alignment between the modalities and yielding the aligned features e_t and e_v. The news reasoning is generated by the LLM and comprises the text description (r_td) and the common sense (r_cs). The news-reasoning collaboration module performs reasoning interaction learning and reasoning judgment prediction on the text modality, the image modality and the LLM reasoning respectively, and computes the text description judgment prediction loss (L_td) and the common sense judgment prediction loss (L_cs). The core of this module is to learn the interaction information between the news modalities and the reasoning, and then adaptively select useful reasoning as a reference. During the interaction with the text description in the news-reasoning collaboration, e_t and e_v first interact with the encoded text description e_td, and feature interaction is learned with a bidirectional cross-modal attention mechanism to obtain f_{t→td} and f_{v→td}. Then, in order to select the reasoning useful for the classification result, [f_{t→td}, f_{v→td}] is input into the reasoning usefulness evaluator, and the weight ŵ_td produced by the evaluator is applied to [f_{t→td}, f_{v→td}] to obtain the final features [f_{t,td}, f_{v,td}]. The interaction process with the common sense is similar and yields the final features [f_{t,cs}, f_{v,cs}]. The aggregation module assigns weights to e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}] through an SE-Net network.
Specifically, in the contrastive learning of S1, a series of image-text pairs is set as {(v_i, t_i)}, i = 1, ..., N, where i denotes the i-th feature pair and N denotes the number of pairs. The contrastive learning loss L_cl is used to achieve alignment of the image modality and the text modality. The text-to-image similarity p_i^{t2v} and the image-to-text similarity p_i^{v2t} predicted for the i-th feature pair are computed as:
p_i^{t2v} = exp(sim(t_i, v_i)/τ) / Σ_{j=1}^{N} exp(sim(t_i, v_j)/τ)
p_i^{v2t} = exp(sim(v_i, t_i)/τ) / Σ_{j=1}^{N} exp(sim(v_i, t_j)/τ)
where sim(·,·) denotes the similarity function and τ is a temperature parameter. The corresponding one-hot labels are y_i^{t2v} and y_i^{v2t}: if the text and image modalities match, the label is 1, and a mismatch is labeled 0. The cross-entropy losses are then computed:
L_t2v = -(1/N) Σ_{i=1}^{N} y_i^{t2v} log p_i^{t2v}
L_v2t = -(1/N) Σ_{i=1}^{N} y_i^{v2t} log p_i^{v2t}
The final contrastive learning loss L_cl is computed as:
L_cl = (L_t2v + L_v2t) / 2
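As a non-limiting illustration of the S1 contrastive alignment, the following PyTorch sketch assumes pooled, L2-normalized text and image features and a CLIP-style symmetric cross-entropy; the temperature value, batch size and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(e_t, e_v, tau=0.07):
    """Symmetric contrastive loss aligning N text features e_t with N image features e_v, both (N, d)."""
    e_t = F.normalize(e_t, dim=-1)
    e_v = F.normalize(e_v, dim=-1)
    sim = e_t @ e_v.t() / tau                               # (N, N) text-to-image similarity logits
    labels = torch.arange(e_t.size(0), device=e_t.device)   # matched pairs lie on the diagonal
    loss_t2v = F.cross_entropy(sim, labels)                 # text -> image direction (L_t2v)
    loss_v2t = F.cross_entropy(sim.t(), labels)             # image -> text direction (L_v2t)
    return 0.5 * (loss_t2v + loss_v2t)                      # L_cl

# Example: a batch of 8 already-pooled 256-d text and image features
loss_cl = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```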
Specifically, in the news-reasoning collaboration of S2, the news-reasoning collaboration module comprises three parts: news-reasoning interaction, LLM judgment prediction and reasoning usefulness evaluation. The advantages of the LLM and the SLM complement each other: the LLM has good analysis capability, while the SLM extracts task-specific features better than the LLM. The LLM improves the accuracy of false information detection through the provided prompts. The news-reasoning interaction learns the interaction information between the reasoning generated by the LLM and the news modalities, fully combining the advantages of the SLM and the LLM. The LLM judgment prediction judges the authenticity of the news according to the reasoning generated by the LLM. The reasoning usefulness evaluation assesses the contribution of different pieces of reasoning and adjusts their weights for the subsequent counterfeit information detection and classification task. Specifically, S2 comprises:
S21, news-reasoning interaction, where a news-reasoning interactor containing a bidirectional cross-attention mechanism is introduced to encourage interactive learning of features; the cross-attention mechanism can be described as:
CA(Q, K, V) = softmax(Q' K'ᵀ / √d) V'
where Q' = W_Q Q, K' = W_K K, V' = W_V V, and d denotes the feature dimension. For the given e_t, e_v and the encoded reasoning features e_td and e_cs,
f_{t→td} = AvgPool(CA(e_t, e_td, e_td));
f_{td→t} = AvgPool(CA(e_td, e_t, e_t));
f_{t→cs} = AvgPool(CA(e_t, e_cs, e_cs));
f_{cs→t} = AvgPool(CA(e_cs, e_t, e_t));
f_{v→td} = AvgPool(CA(e_v, e_td, e_td));
f_{td→v} = AvgPool(CA(e_td, e_v, e_v));
f_{v→cs} = AvgPool(CA(e_v, e_cs, e_cs));
f_{cs→v} = AvgPool(CA(e_cs, e_v, e_v));
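A minimal sketch of one bidirectional news-reasoning interaction in S21, assuming torch's built-in multi-head attention as the cross-attention CA followed by average pooling over the sequence dimension; the head count, feature dimension and the use of nn.MultiheadAttention are illustrative choices, not specified by the method.

```python
import torch
import torch.nn as nn

class NewsReasoningInteractor(nn.Module):
    """Bidirectional cross-attention between a news feature sequence and a reasoning feature sequence."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.ca_news2rsn = nn.MultiheadAttention(dim, heads, batch_first=True)  # news attends to reasoning
        self.ca_rsn2news = nn.MultiheadAttention(dim, heads, batch_first=True)  # reasoning attends to news

    def forward(self, e_news, e_rsn):
        # CA(Q, K, V) with Q = news, K = V = reasoning, then AvgPool over the sequence dimension
        f_news2rsn, _ = self.ca_news2rsn(e_news, e_rsn, e_rsn)
        # CA(Q, K, V) with Q = reasoning, K = V = news
        f_rsn2news, _ = self.ca_rsn2news(e_rsn, e_news, e_news)
        return f_news2rsn.mean(dim=1), f_rsn2news.mean(dim=1)  # pooled interaction features

# e.g. f_t_td, f_td_t = NewsReasoningInteractor()(e_t, e_td); applying the same pattern over
# text/image and text-description/common-sense yields the eight pooled features listed above.
```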
S22, LLM judgment prediction: understanding the judgment implied by a given piece of reasoning is a precondition for fully utilizing the information behind the reasoning. The interaction features of the text description and of the common sense are respectively input into a multilayer perceptron, where m̂_td and m̂_cs respectively represent the predicted judgments of the text description and the common sense; the calculations are as follows:
m̂_td = sigmoid(MLP(f_{td→t}));
m̂_cs = sigmoid(MLP(f_{cs→t}));
L_td = -m_td log m̂_td - (1 - m_td) log(1 - m̂_td);
L_cs = -m_cs log m̂_cs - (1 - m_cs) log(1 - m̂_cs);
where m_td and m_cs indicate whether the text description and the common sense give a correct prediction, and L_td and L_cs represent the cross-entropy losses of the text description prediction and the common sense prediction.
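A sketch of the S22 judgment predictor under the assumption that a two-layer MLP maps a pooled reasoning-to-news interaction feature to a sigmoid score and is trained with binary cross-entropy against whether that reasoning judged the news correctly; the hidden size and the choice of input feature are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReasoningJudgmentPredictor(nn.Module):
    """Predicts whether the judgment implied by an LLM reasoning is correct, from its interaction feature."""
    def __init__(self, dim=256, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, f_rsn2news):
        return torch.sigmoid(self.mlp(f_rsn2news)).squeeze(-1)  # m_hat in (0, 1)

predictor = ReasoningJudgmentPredictor()
m_hat_td = predictor(torch.randn(8, 256))             # prediction for the text-description reasoning
m_td = torch.randint(0, 2, (8,)).float()              # 1 if that reasoning judged the news correctly
loss_td = nn.functional.binary_cross_entropy(m_hat_td, m_td)  # L_td as in the formula above
```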
S23, reasoning usefulness evaluation: reasoning from different angles contributes differently for different news items, and improper integration may result in performance degradation. In order to enable the model to adaptively select appropriate reasoning, a reasoning usefulness evaluation process is designed, in which the contributions of the different pieces of reasoning are assessed and their weights are adjusted for the subsequent veracity prediction. In the text description usefulness evaluation, f_{t→td} and f_{v→td} are first concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_td is predicted, and the cross-entropy loss L_wtd of the predicted usefulness weight is computed; the process is as follows:
ŵ_td = sigmoid(MLP([f_{t→td}, f_{v→td}]));
L_wtd = -w_td log ŵ_td - (1 - w_td) log(1 - ŵ_td);
In the common sense usefulness evaluation, f_{t→cs} and f_{v→cs} are concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_cs is predicted, and the cross-entropy loss L_wcs of the predicted usefulness weight is computed; the process is as follows:
ŵ_cs = sigmoid(MLP([f_{t→cs}, f_{v→cs}]));
L_wcs = -w_cs log ŵ_cs - (1 - w_cs) log(1 - ŵ_cs);
Then ŵ_td and ŵ_cs are used to re-weight [f_{t→td}, f_{v→td}] and [f_{t→cs}, f_{v→cs}]:
[f_{t,td}, f_{v,td}] = ŵ_td * [f_{t→td}, f_{v→td}];
[f_{t,cs}, f_{v,cs}] = ŵ_cs * [f_{t→cs}, f_{v→cs}].
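A sketch of the S23 usefulness evaluator, assuming the two pooled news-to-reasoning features are concatenated, scored by an MLP with a sigmoid, and the resulting scalar weight re-scales the concatenated features; how the usefulness labels w_td and w_cs are constructed is not detailed here, so the label used below is hypothetical.

```python
import torch
import torch.nn as nn

class ReasoningUsefulnessEvaluator(nn.Module):
    """Scores how useful a piece of reasoning is and re-weights its interaction features accordingly."""
    def __init__(self, dim=256, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, f_t2rsn, f_v2rsn):
        concat = torch.cat([f_t2rsn, f_v2rsn], dim=-1)        # [f_{t->td}, f_{v->td}] or the common-sense pair
        w_hat = torch.sigmoid(self.mlp(concat)).squeeze(-1)   # predicted usefulness weight
        reweighted = w_hat.unsqueeze(-1) * concat             # re-weighted interaction features
        return w_hat, reweighted

evaluator = ReasoningUsefulnessEvaluator()
w_hat_td, feat_td = evaluator(torch.randn(8, 256), torch.randn(8, 256))
w_td = torch.ones(8)  # hypothetical usefulness labels for L_wtd
loss_w_td = nn.functional.binary_cross_entropy(w_hat_td, w_td)
```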
Specifically, in the feature aggregation of S3, the feature aggregation module assigns appropriate weights to e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}], and then fuses the features into a fused feature vector F. The feature aggregation part designs a modality attention mechanism to re-adjust the weight channels of e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}]. First, these features are fused together; global pooling and max pooling operations are then applied; the initial weights obtained in the previous step are then passed into two 4×4 fully connected layers. Through normalization based on GELU and Sigmoid, the attention weights att = {a_t, a_v, a_td, a_cs} are obtained, and the final fused feature F is expressed as:
F = a_t * e_t + a_v * e_v + a_td * [f_{t,td}, f_{v,td}] + a_cs * [f_{t,cs}, f_{v,cs}];
Finally, F is input into the classifier to complete the final classification, and the cross-entropy loss function L_ce of the classification result is computed.
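A sketch of the S3 modality-attention aggregation, assuming all four inputs have been projected to a common dimension and are stacked as channels, squeezed by global average and max pooling, passed through two small fully connected layers with a GELU in between, and normalized by a sigmoid in the spirit of an SE-style attention block; interpreting the "4×4" layers as mapping four channel descriptors to four attention weights is an assumption, as are the dimensions and the two-class classifier.

```python
import torch
import torch.nn as nn

class ModalityAttentionAggregator(nn.Module):
    """SE-style attention over four modality features, followed by weighted fusion and classification."""
    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        # Two small fully connected layers mapping the 4 channel descriptors to 4 attention logits
        self.fc = nn.Sequential(nn.Linear(4, 4), nn.GELU(), nn.Linear(4, 4))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, e_t, e_v, feat_td, feat_cs):
        feats = torch.stack([e_t, e_v, feat_td, feat_cs], dim=1)  # (B, 4, dim): fuse the features as channels
        s_avg = feats.mean(dim=-1)                                # (B, 4) global average pooling per channel
        s_max = feats.max(dim=-1).values                          # (B, 4) max pooling per channel
        att = torch.sigmoid(self.fc(s_avg) + self.fc(s_max))      # (B, 4) attention weights a_t, a_v, a_td, a_cs
        fused = (att.unsqueeze(-1) * feats).sum(dim=1)            # F = a_t*e_t + a_v*e_v + a_td*feat_td + a_cs*feat_cs
        return self.classifier(fused)                             # logits; train with cross-entropy (L_ce)

logits = ModalityAttentionAggregator()(torch.randn(8, 256), torch.randn(8, 256),
                                       torch.randn(8, 256), torch.randn(8, 256))
```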
The foregoing description of the preferred embodiments of the present invention should not be construed as limiting the scope of the invention, but rather should be understood to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following description and drawings or any application directly or indirectly to other relevant art(s).

Claims (10)

1. A bimodal counterfeit information detection method based on a large language model, characterized by comprising the following steps:
S1, encoding text features and image features respectively to obtain text-encoded features e_t and image-encoded features e_v, and aligning the text modality and the image modality by computing a contrastive learning loss;
S2, generating a text description r_td and common sense r_cs through a large language model, performing interaction learning and reasoning usefulness evaluation on e_t, e_v, r_td and r_cs through the news-reasoning collaboration module, and generating text-text description interaction features f_{t,td}, text-common sense interaction features f_{t,cs}, image-text description interaction features f_{v,td} and image-common sense interaction features f_{v,cs};
S3, the feature aggregation module assigns weights to e_t, e_v, f_{t,td}, f_{t,cs}, f_{v,td} and f_{v,cs}, and the fused feature is finally input into a classifier to complete the final classification.
2. The large language model based bimodal counterfeit information detection method of claim 1, wherein: in S1, text feature encoding is performed by a Transformer model, and image feature encoding is performed by a ResNet network.
3. The large language model based bimodal counterfeit information detection method of claim 1, wherein: in S1, the image-text pairs are set as {(v_i, t_i)}, i = 1, ..., N, where i denotes the i-th feature pair and N denotes the number of pairs; the text-to-image similarity p_i^{t2v} and the image-to-text similarity p_i^{v2t} predicted for the i-th feature pair are computed as:
p_i^{t2v} = exp(sim(t_i, v_i)/τ) / Σ_{j=1}^{N} exp(sim(t_i, v_j)/τ)
p_i^{v2t} = exp(sim(v_i, t_i)/τ) / Σ_{j=1}^{N} exp(sim(v_i, t_j)/τ)
where sim(·,·) denotes the similarity function and τ is a temperature parameter; the corresponding one-hot labels are y_i^{t2v} and y_i^{v2t}, matching text and image modalities are labeled 1 and mismatches are labeled 0, and the cross-entropy losses are then computed:
L_t2v = -(1/N) Σ_{i=1}^{N} y_i^{t2v} log p_i^{t2v}
L_v2t = -(1/N) Σ_{i=1}^{N} y_i^{v2t} log p_i^{v2t}
The final contrastive learning loss L_cl is computed as:
L_cl = (L_t2v + L_v2t) / 2
4. The large language model based bimodal counterfeit information detection method of claim 1, wherein: the large language model in S2 is an LLM (Large Language Model).
5. The large language model based bimodal counterfeit information detection method of claim 3, wherein S2 comprises:
S21, news-reasoning interaction, namely introducing a news-reasoning interactor containing a bidirectional cross-attention mechanism to encourage interactive learning of features, wherein the cross-attention mechanism is:
CA(Q, K, V) = softmax(Q' K'ᵀ / √d) V'
where Q' = W_Q Q, K' = W_K K, V' = W_V V, and d denotes the feature dimension; for the given e_t, e_v and the encoded reasoning features e_td and e_cs, there is
f_{t→td} = AvgPool(CA(e_t, e_td, e_td));
f_{td→t} = AvgPool(CA(e_td, e_t, e_t));
f_{t→cs} = AvgPool(CA(e_t, e_cs, e_cs));
f_{cs→t} = AvgPool(CA(e_cs, e_t, e_t));
f_{v→td} = AvgPool(CA(e_v, e_td, e_td));
f_{td→v} = AvgPool(CA(e_td, e_v, e_v));
f_{v→cs} = AvgPool(CA(e_v, e_cs, e_cs));
f_{cs→v} = AvgPool(CA(e_cs, e_v, e_v));
S22, LLM judgment prediction, namely respectively inputting the interaction features of the text description and of the common sense into a multilayer perceptron, where m̂_td and m̂_cs respectively represent the predicted judgments of the text description and the common sense; the calculations are as follows:
m̂_td = sigmoid(MLP(f_{td→t}));
m̂_cs = sigmoid(MLP(f_{cs→t}));
L_td = -m_td log m̂_td - (1 - m_td) log(1 - m̂_td);
L_cs = -m_cs log m̂_cs - (1 - m_cs) log(1 - m̂_cs);
where m_td and m_cs indicate whether the text description and the common sense give a correct prediction, and L_td and L_cs represent the cross-entropy losses of the text description prediction and the common sense prediction;
S23, reasoning usefulness evaluation, wherein in the text description usefulness evaluation, f_{t→td} and f_{v→td} are concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_td is predicted, and the cross-entropy loss L_wtd of the predicted usefulness weight is computed; the process is as follows:
ŵ_td = sigmoid(MLP([f_{t→td}, f_{v→td}]));
L_wtd = -w_td log ŵ_td - (1 - w_td) log(1 - ŵ_td);
in the common sense usefulness evaluation, f_{t→cs} and f_{v→cs} are concatenated and input into a reasoning usefulness evaluator parameterized by an MLP, the usefulness weight ŵ_cs is predicted, and the cross-entropy loss L_wcs of the predicted usefulness weight is computed; the process is as follows:
ŵ_cs = sigmoid(MLP([f_{t→cs}, f_{v→cs}]));
L_wcs = -w_cs log ŵ_cs - (1 - w_cs) log(1 - ŵ_cs);
then ŵ_td and ŵ_cs are used to re-weight [f_{t→td}, f_{v→td}] and [f_{t→cs}, f_{v→cs}]:
[f_{t,td}, f_{v,td}] = ŵ_td * [f_{t→td}, f_{v→td}];
[f_{t,cs}, f_{v,cs}] = ŵ_cs * [f_{t→cs}, f_{v→cs}].
6. The large language model based bimodal counterfeit information detection method of claim 1, wherein: the feature aggregation module in S3 adopts a fuzzy learning method to assign weights to e_t, e_v, f_{t,td}, f_{t,cs}, f_{v,td} and f_{v,cs}.
7. The large language model based bimodal counterfeit information detection method of claim 5, wherein: in S3, the feature aggregation module sets a modality attention mechanism to re-adjust the weight channels of e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}]; the features e_t, e_v, [f_{t,td}, f_{v,td}] and [f_{t,cs}, f_{v,cs}] are fused, global pooling and max pooling operations are then applied, the obtained initial weights s_avg and s_max are sent to a plurality of connection layers, and the attention weights att = {a_t, a_v, a_td, a_cs} are obtained through normalization based on GELU and Sigmoid; the final fused feature vector F is expressed as:
F = a_t * e_t + a_v * e_v + a_td * [f_{t,td}, f_{v,td}] + a_cs * [f_{t,cs}, f_{v,cs}];
Finally, F is input into the classifier to complete the final classification, and the cross-entropy loss function L_ce of the classification result is computed.
8. The large language model based bimodal counterfeit information detection method of claim 7, wherein: the connection layer is a 4×4 fully connected layer.
9. The large language model based bimodal counterfeit information detection method of claim 7, wherein: the number of the connecting layers is two.
10. The large language model based bimodal counterfeit information detection method of claim 7, wherein: the normalization process is a GELU and Sigmoid based normalization process.
CN202410562720.0A 2024-05-08 2024-05-08 Bimodal counterfeit information detection method based on large language model Active CN118171235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410562720.0A CN118171235B (en) 2024-05-08 2024-05-08 Bimodal counterfeit information detection method based on large language model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410562720.0A CN118171235B (en) 2024-05-08 2024-05-08 Bimodal counterfeit information detection method based on large language model

Publications (2)

Publication Number Publication Date
CN118171235A true CN118171235A (en) 2024-06-11
CN118171235B CN118171235B (en) 2024-07-26

Family

ID=91350690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410562720.0A Active CN118171235B (en) 2024-05-08 2024-05-08 Bimodal counterfeit information detection method based on large language model

Country Status (1)

Country Link
CN (1) CN118171235B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115964482A (en) * 2022-05-24 2023-04-14 西北工业大学 Multi-mode false news detection method based on user cognitive consistency reasoning
CN117251795A (en) * 2023-10-10 2023-12-19 天津理工大学 Multi-mode false news detection method based on self-adaptive fusion
CN117271768A (en) * 2023-09-19 2023-12-22 中国科学院计算技术研究所 False news detection method and device based on large language model analysis and guidance
CN117577119A (en) * 2024-01-17 2024-02-20 清华大学 Fake voice detection method, system, equipment and medium integrating large language model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115964482A (en) * 2022-05-24 2023-04-14 西北工业大学 Multi-mode false news detection method based on user cognitive consistency reasoning
CN117271768A (en) * 2023-09-19 2023-12-22 中国科学院计算技术研究所 False news detection method and device based on large language model analysis and guidance
CN117251795A (en) * 2023-10-10 2023-12-19 天津理工大学 Multi-mode false news detection method based on self-adaptive fusion
CN117577119A (en) * 2024-01-17 2024-02-20 清华大学 Fake voice detection method, system, equipment and medium integrating large language model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LONGZHENG WANG et al.: "MMIDR: Teaching Large Language Model to Interpret Multimodal Misinformation via Knowledge Distillation", ARXIV:2403.14171, 21 March 2024 (2024-03-21), pages 1-10 *

Also Published As

Publication number Publication date
CN118171235B (en) 2024-07-26

Similar Documents

Publication Publication Date Title
Verma et al. Modified convolutional neural network architecture analysis for facial emotion recognition
CN114387567B (en) Video data processing method and device, electronic equipment and storage medium
CN109544306A (en) A kind of cross-cutting recommended method and device based on user behavior sequence signature
CN112380835B (en) Question answer extraction method integrating entity and sentence reasoning information and electronic device
CN114492407B (en) News comment generation method, system, equipment and storage medium
CN111538841B (en) Comment emotion analysis method, device and system based on knowledge mutual distillation
CN114021524B (en) Emotion recognition method, device, equipment and readable storage medium
CN112256859A (en) Recommendation method based on bidirectional long-short term memory network explicit information coupling analysis
CN116975776A (en) Multi-mode data fusion method and device based on tensor and mutual information
CN112820320A (en) Cross-modal attention consistency network self-supervision learning method
CN116579347A (en) Comment text emotion analysis method, system, equipment and medium based on dynamic semantic feature fusion
CN113963200A (en) Modal data fusion processing method, device, equipment and storage medium
CN115270004A (en) Education resource recommendation method based on field factor decomposition
JP7563842B2 (en) Neural network operation and learning method and said neural network
CN115270807A (en) Method, device and equipment for judging emotional tendency of network user and storage medium
CN113268592B (en) Short text object emotion classification method based on multi-level interactive attention mechanism
CN113849725B (en) Socialized recommendation method and system based on graph attention confrontation network
CN118171235B (en) Bimodal counterfeit information detection method based on large language model
CN113642630A (en) Image description method and system based on dual-path characteristic encoder
Jenny Li et al. Evaluating deep learning biases based on grey-box testing results
CN117437467A (en) Model training method and device, electronic equipment and storage medium
CN117216223A (en) Dialogue text generation method and device, storage medium and electronic equipment
CN115620342A (en) Cross-modal pedestrian re-identification method, system and computer
CN109166118A (en) Fabric surface attribute detection method, device and computer equipment
CN114818613A (en) Dialogue management model construction method based on deep reinforcement learning A3C algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant