CN117312562A - Training method, device, equipment and storage medium of content auditing model - Google Patents

Training method, device, equipment and storage medium of content auditing model

Info

Publication number
CN117312562A
CN117312562A
Authority
CN
China
Prior art keywords
content
model
sample text
checking
text
Prior art date
Legal status
Pending
Application number
CN202311286453.0A
Other languages
Chinese (zh)
Inventor
吴秉哲
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311286453.0A priority Critical patent/CN117312562A/en
Publication of CN117312562A publication Critical patent/CN117312562A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9536: Search customisation based on social or collaborative filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/0455: Auto-encoder networks; Encoder-decoder networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0499: Feedforward networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/04: Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a training method, device and equipment for a content audit model, and a storage medium, belonging to the field of content auditing. The method comprises the following steps: acquiring content features of a plurality of sample texts and an audit label for each sample text; generating, through a generative model, a first audit process of the sample text according to the content features of the sample text, wherein the first audit process is used for expressing a process of reasoning about whether the content of the sample text is compliant; performing content sampling on the first audit process a plurality of times to obtain a plurality of sampling results of the first audit process; performing a consistency check on the first audit process and the plurality of sampling results to select a second audit process of the sample text; and training the content audit model through the second audit process of the sample text. The method and device can eliminate the errors and 'hallucinations' introduced when a generative model generates training data, thereby improving the reliability of the trained content audit model.

Description

Training method, device, equipment and storage medium of content auditing model
Technical Field
The present invention relates to the field of content auditing, and in particular, to a training method, device, equipment and storage medium for a content auditing model.
Background
With the explosive growth of user-generated content, the demand for content auditing on online platforms keeps rising. Manual auditing cannot keep pace with such massive volumes of content, so automated content audit models are now commonly built to perform content auditing.
In the related art, training a content audit model requires a large amount of labeled training data, and zero-shot learning for content audit models is difficult to achieve. A generative model, such as a large language model, may therefore be employed to generate the training data used to train the content audit model.
However, training data generated by a large language model is prone to errors and 'hallucinations', where a hallucination refers to generated content that appears logically coherent but is factually wrong or inconsistent with reality. Training on such data results in a content audit model with low reliability.
Disclosure of Invention
The present application provides a training method, apparatus, device and storage medium for a content audit model, which can improve the reliability of the trained content audit model. The technical solution is as follows:
according to an aspect of the present application, there is provided a training method of a content audit model, the method including:
acquiring content features of a plurality of sample texts and an audit label for each sample text, wherein the audit label is used for reflecting whether the content of the sample text is compliant;
generating, through a generative model, a first audit process of the sample text according to the content features of the sample text, wherein the first audit process is used for expressing a process of reasoning about whether the content of the sample text is compliant;
performing content sampling on the first audit process a plurality of times to obtain a plurality of sampling results of the first audit process, and performing a consistency check on the first audit process and the plurality of sampling results of the first audit process to select a second audit process of the sample text;
and training the content audit model through the second audit process of the sample text, wherein the content audit model is used for predicting a label of a text to be predicted according to content features of the text to be predicted, and the label of the text to be predicted is used for predicting whether the content of the text to be predicted is compliant.
According to another aspect of the present application, there is provided a training apparatus of a content audit model, the apparatus including:
an acquisition module, configured to acquire content features of a plurality of sample texts and an audit label for each sample text, wherein the audit label is used for reflecting whether the content of the sample text is compliant;
a generation module, configured to generate, through a generative model, a first audit process of the sample text according to the content features of the sample text, wherein the first audit process is used for expressing a process of reasoning about whether the content of the sample text is compliant;
a verification module, configured to perform content sampling on the first audit process a plurality of times to obtain a plurality of sampling results of the first audit process, and to perform a consistency check on the first audit process and the plurality of sampling results of the first audit process to select a second audit process of the sample text;
and a training module, configured to train the content audit model through the second audit process of the sample text, wherein the content audit model is used for predicting a label of a text to be predicted according to content features of the text to be predicted, and the label of the text to be predicted is used for predicting whether the content of the text to be predicted is compliant.
In an optional design, the generation module is configured to generate, through the generative model, a question corresponding to the first audit process according to the first audit process, wherein the question is related to the content of the first audit process; generate a first answer according to the first audit process and the question through the generative model; perform content sampling on the first audit process a plurality of times through the generative model to obtain the plurality of sampling results of the first audit process; and generate, through the generative model, a second answer for each sampling result according to that sampling result of the first audit process and the question;
and the verification module is configured to perform a consistency check on the first answer of the first audit process and the second answer of each sampling result, so as to select the second audit process of the sample text.
In an optional design, the verification module is configured to determine a consistency rate between the first answer of the first audit process and the second answers of the sampling results; and to remove, from all first audit processes, the first audit processes whose consistency rate is below a consistency rate threshold, and determine the remaining first audit processes as second audit processes.
In an optional design, the generation module is configured to input the content features of the sample text and first guidance information into the generative model to obtain a third audit process of the sample text, wherein the first guidance information is used for guiding the generative model to generate the third audit process of the sample text according to the content features of the sample text;
and the apparatus further comprises a determination module, configured to determine the first audit process of the sample text according to the third audit process of the sample text.
In an optional design, the generation module is configured to input the content features of the sample text, the third audit process of the sample text and second guidance information into the generative model to obtain a first prediction label of the sample text, wherein the first prediction label is used for predicting whether the content of the sample text is compliant, and the second guidance information is used for guiding the generative model to generate the first prediction label of the sample text according to the content features of the sample text and the third audit process of the sample text;
and the determination module is configured to determine the third audit process of the sample text as the first audit process of the sample text in the case that the first prediction label is consistent with the audit label.
In an optional design, the acquisition module is configured to acquire third guidance information in the case that the first prediction label is inconsistent with the audit label, wherein the third guidance information is obtained by modifying the first guidance information;
the generation module is configured to input the content features of the sample text and the third guidance information into the generative model to regenerate the third audit process of the sample text;
and the determination module is configured to determine the first audit process of the sample text according to the regenerated third audit process of the sample text.
In an optional design, the generation module is configured to input the audit label of the sample text into the generative model during the regeneration of the third audit process of the sample text.
In an optional design, the training module is configured to train the content audit model according to the content features of the sample text, the audit label and the second audit process.
In an optional design, the training module is configured to input the content features of the sample text and the second audit process of the sample text into the content audit model to obtain a second prediction label of the sample text; determine an error loss according to the error between the second prediction label of the sample text and the audit label of the sample text; and train the content audit model through the error loss.
According to another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one program that is loaded and executed by the processor to implement the training method of the content audit model as described in the above aspect.
According to another aspect of the present application, there is provided a computer readable storage medium having stored therein at least one program loaded and executed by a processor to implement a training method of a content audit model as described in the above aspect.
According to another aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the training method of the content audit model provided in various alternative implementations of the above aspects.
The technical solution provided by the present application brings at least the following beneficial effects:
By generating a first audit process for the sample text and performing a consistency check on it against the content sampling results of that audit process, aligned and more accurate training data, namely the second audit process, can be generated and screened. If a first audit process and its content sampling results cannot pass the consistency check, the generated first audit process is likely to contain errors or 'hallucinations'. Performing consistency verification on the generated data therefore screens the data used for model training and eliminates the errors and hallucinations produced when the generative model generates data. Training the content audit model with the more accurate second audit process improves the reliability of the content audit model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a process for training a content audit model provided in one exemplary embodiment of the present application;
FIG. 3 is a flowchart of a training method of a content audit model provided in an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a training method of a content audit model provided in an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a machine learning model provided in an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of an encoder provided in an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of a decoder provided in an exemplary embodiment of the present application;
FIG. 8 is a flow chart of a method for training an information flow audit model provided in an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a training device for a content audit model according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a training device for a content audit model according to an exemplary embodiment of the present application;
fig. 11 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, the related terms referred to in this application are described:
artificial intelligence (Artificial Intelligence, AI): the system is a theory, a method, a technology and an application system which simulate, extend and extend human intelligence by using a digital computer or a machine controlled by the digital computer, sense environment, acquire knowledge and acquire an optimal result by using the knowledge. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
The artificial intelligence technology is a comprehensive subject, and relates to the technology with wide fields, namely the technology with a hardware level and the technology with a software level. Artificial intelligence infrastructure technologies generally include, for example, sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, pre-training model technologies, operation/interaction systems, mechatronics, and the like. The pre-training model is also called a large model and a basic model, and can be widely applied to all large-direction downstream tasks of artificial intelligence after fine adjustment. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Natural language processing (Nature Language processing, NLP): is an important direction in the fields of computer science and artificial intelligence. It is studying various theories and methods that enable effective communication between a person and a computer in natural language. Natural language processing involves natural language, i.e., the language that people use daily, closely with linguistic research, and simultaneously involves computer science and mathematics, and is an important technology for model training in the artificial intelligence field, for example, a pre-training model, i.e., developed from a large language model (Large Language Model, LLM) in the NLP field. Through fine tuning, the large language model can be widely applied to downstream tasks. Natural language processing techniques typically include text processing, semantic understanding, machine translation, robotic questions and answers, knowledge graph techniques, and the like. The generative model in this application supports natural language processing.
Machine Learning (ML): is a multi-domain interdisciplinary, and relates to a plurality of disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and the like. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like. The pre-training model is the latest development result of deep learning, and integrates the technology.
Pre-training model (Pre-training model): the model is also called a basic stone model and a large model, which refer to a deep neural network (Deep neural network, DNN) with large parameters, the deep neural network is trained on massive unlabeled data, common characteristics are extracted from the data by utilizing the function approximation capability of the large-parameter DNN, and the model is suitable for downstream tasks through technologies such as fine tuning, efficient fine tuning (PEFT) of parameters, prompt-tuning (a model fine tuning method) and the like. Therefore, the pre-training model can achieve ideal effects in a small sample (Few-shot) or Zero sample (Zero-shot) scene. The pre-trained models can be categorized according to the data modality of processing into language models (e.g., ELMO, BERT, GPT), visual models (e.g., swin-transformer, viT, V-MOE), speech models (e.g., VALL-E), multi-modal models (e.g., viBERT, CLIP, flamingo, gato), etc., where multi-modal models refer to models that build a representation of two or more data modality features. The pre-training model is an important tool for outputting Artificial Intelligence Generation Content (AIGC), and can also be used as a general interface for connecting a plurality of specific task models. The generative model in this application may be considered as a pre-training model.
Large language model (Large Language Model, LLM): is an artificial intelligence model aimed at understanding and generating human language. They train on a large amount of text data and can perform a wide range of tasks including text summarization, translation, emotion analysis, and so forth. LLMs are characterized by a large scale, containing billions of parameters, which help them learn complex patterns in linguistic data. In some embodiments, the generative model in the present application is a large language model.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system includes a terminal 110, a first server 120 and a second server 130, which are connected through a communication network 140. In some embodiments, a client that supports text input by the user is deployed in the terminal 110; the client is a stand-alone client, an applet that runs inside a host program, or a web page, which is not limited in the embodiments of the present application. In some embodiments, the client includes an instant messaging client, a video client, a social client, a financial client, an online shopping client, a music client, a takeaway client, an office client, a game client, a map client, a traffic client, a navigation client, and the like, through which the terminal 110 communicates with the first server 120.
It should be noted that the numbers of terminals 110 and first servers 120 in FIG. 1 are only examples and do not limit the structure of the computer system provided in the embodiments of the present application. It is understood that the first server 120 may be connected to a plurality of terminals 110.
In some embodiments, the terminal 110 is configured to send a text to be predicted to the first server 120, where the text to be predicted is text input by a user, including manually input text or text generated by a machine learning model, which is not limited in the embodiments of the present application. In some embodiments, the terminal 110 sends the text to be predicted to the first server 120 when a check of whether the content of the text input by the user is compliant is triggered.
The process of predicting the label of the text to be predicted may be performed by the terminal 110 alone, by the first server 120, or by the terminal 110 and the first server 120 through data interaction, which is not limited in the embodiments of the present application. Illustratively, the first server 120 predicts the label of the text to be predicted through the content audit model 121. In some embodiments, the content audit model 121 is deployed in the first server 120. In some embodiments, the content audit model 121 is not deployed in the first server 120, and the first server 120 inputs information to the content audit model 121, and obtains the information it outputs, through an application programming interface (API) corresponding to the content audit model 121. In some embodiments, the content audit model 121 is a model built and trained in-house. In some embodiments, the content audit model 121 is a published pre-trained model. In some embodiments, the content audit model 121 is a large language model, a discriminative model, or a classification model. By inputting the text to be predicted into the content audit model 121, the first server 120 can obtain a label of the text to be predicted, which predicts whether the content of the text to be predicted is compliant. Optionally, the first server 120 sends the label to the terminal 110, and the terminal 110 processes the text to be predicted according to whether it is compliant, for example by deleting the text or issuing a warning.
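As a purely illustrative sketch, the snippet below shows how the first server 120 might obtain the label of a text to be predicted through an API wrapping the content audit model 121; call_audit_api and the endpoint and response format are hypothetical, not interfaces disclosed by this application.

```python
import json
import urllib.request

def call_audit_api(endpoint: str, text: str) -> int:
    # POST the text to be predicted to the (hypothetical) content audit
    # model API and return its label: 0 = compliant, 1 = non-compliant.
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return int(json.load(resp)["label"])
```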
The content audit model 121 is trained with training data provided by the second server 130. In some embodiments, the second server 130 is provided separately from the first server 120. In some embodiments, the second server 130 is implemented as the same server as the first server 120. The second server 130 can invoke a generative model 131, which is used to generate output information matching the input information. In some embodiments, the generative model 131 is deployed in the second server 130. In some embodiments, the generative model 131 is not deployed in the second server 130, and the second server 130 inputs information to the generative model 131, and obtains the information it outputs, through an API corresponding to the generative model 131. In some embodiments, the generative model 131 is a model built and trained in-house. In some embodiments, the generative model 131 is a published pre-trained model. In some embodiments, the generative model 131 is a large language model. Based on the content features of a plurality of sample texts and the audit label of each sample text, the second server 130 can generate, through the generative model 131, a first audit process of the sample text that conforms to its audit label, the first audit process expressing a process of reasoning about whether the content of the sample text is compliant. A consistency check is then performed on the generated first audit process based on a plurality of content sampling results of the first audit process generated by the generative model 131, so as to select a second audit process, and the second audit process, the content features of the sample text and the audit label of the sample text are used as training data. Aligning the generated data with the original data and performing the consistency check on the generated data helps eliminate the errors and 'hallucinations' introduced when the generative model 131 generates data, thereby producing training data of higher accuracy for training the content audit model 121.
It should be noted that the above terminals include, but are not limited to, mobile terminals such as mobile phones, tablet computers, portable laptop computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals and aircraft, and may also be implemented as desktop computers and the like. The first server and the second server may be independent physical servers, a server cluster or distributed system composed of multiple physical servers, or cloud servers providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data and artificial intelligence platforms.
Cloud technology refers to a hosting technology that unifies hardware, application, network and other resources in a wide area network or local area network to realize the computation, storage, processing and sharing of data. It is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently.
In some embodiments, the first server and the second server may be implemented as nodes in a blockchain system.
FIG. 2 is a schematic diagram of a process for training a content audit model provided in an exemplary embodiment of the present application. As shown in FIG. 2, the computer device obtains a first data set 201, which includes content features of a plurality of sample texts and an audit label for each sample text, the audit label reflecting whether the content of the sample text is compliant. In some embodiments, the audit labels are manually annotated. The computer device then generates aligned training data through the following steps to obtain a second data set 202.
Audit process generation: the computer device generates a third audit process of the sample text from the content features of the sample text through the generative model 203, the third audit process expressing the process by which the generative model 203 infers whether the content of the sample text is compliant.
Label consistency processing of the audit process: the computer device generates, through the generative model 203, a first prediction label of the sample text according to the content features of the sample text and the generated third audit process, and determines the third audit process as the first audit process in the case that the generated first prediction label is consistent with the audit label. In the case that the generated first prediction label is inconsistent with the audit label, the computer device acquires modified guidance information, regenerates the third audit process of the sample text according to the modified guidance information, and then performs the above judgment and processing again. The guidance information is the information input into the generative model 203 when the third audit process of the sample text is generated, and is used to guide the generative model to generate the third audit process of the sample text according to the content features of the sample text.
Audit process hallucination elimination: the computer device generates, through the generative model 203, a question corresponding to the first audit process according to the first audit process of the sample text, the question being related to the content of the first audit process, and generates a first answer from the first audit process and the question through the generative model 203. Content sampling is then performed on the first audit process a plurality of times through the generative model 203, obtaining a plurality of sampling results that express the content of the first audit process. The computer device then generates, through the generative model 203, a second answer for each sampling result according to that sampling result and the previously generated question, and performs a consistency check on the first answer of the first audit process and the second answer of each sampling result, so as to select, from all first audit processes, the second audit processes of the sample texts that meet the consistency requirement, thereby eliminating errors and 'hallucinations' in the generated audit processes. If a first audit process and its content sampling results cannot pass the consistency check, the generated first audit process is likely to contain errors or hallucinations. Performing consistency verification on the generated data therefore screens the data used for model training and eliminates the errors and hallucinations produced when the generative model generates data.
After the second audit process of the sample text is obtained, the computer device merges it with the first data set 201 to obtain the second data set 202, so as to train the content audit model 204 according to the content features, audit labels and second audit processes of the sample texts in the second data set 202.
By generating a first audit process for the sample text and performing a consistency check on it against the content sampling results of that audit process, aligned and more accurate training data, namely the second audit process, can be generated and screened. Performing the consistency check on the generated data helps eliminate errors and 'hallucinations' produced when the generative model generates data. Training the content audit model with the more accurate second audit process improves the reliability of the content audit model.
FIG. 3 is a flowchart of a training method of a content audit model according to an exemplary embodiment of the present application. The method may be performed by a computer device or by a client on a computer device. As shown in FIG. 3, the method includes:
step 302: content characteristics of a plurality of sample texts and an audit tag of each sample text are obtained.
The content characteristics of the sample text are used to reflect the content of the sample text. Optionally, the content feature of the sample text refers to text data of the sample text. The audit tag is used to reflect whether the content of the sample text is compliant. Illustratively, whether the content is compliant includes compliance with regulations of the relevant region, compliance with requirements set by the platform for the content entered by the user, and the like, such as being harmful information that is detrimental to mental health.
The computer device obtains the content characteristics of the sample text through the local data, or obtains the content characteristics of the sample text through other computer devices connected with the computer device, or obtains the content characteristics of the sample text through the input data. Optionally, the audit tag is manually labeled with the sample text based on the content characteristics of the sample text. For example, an audit tag of 0 indicates normal content and an audit tag of 1 indicates offending content.
Optionally, after obtaining the content characteristics of the plurality of sample texts and the audit tag for each sample text, the computer device will collect in the first data set.
Step 304: and generating a first check process of the sample text according to the content characteristics of the sample text through a generation model.
The generation model is used to generate output information that matches the input information. In some embodiments, the generative model comprises a large language model. Optionally, when inputting the input information into the generating model to generate the corresponding output information, guiding information (also referred to as prompt) needs to be input into the generating model, where the guiding information is used to guide the generating model to output the output information matched with the input information and meeting the requirement of the guiding information according to the input information, and the guiding information is input together with or separately from the input information.
In some embodiments, the generative model is deployed in a computer device that directly invokes the generative model. In some embodiments, the generative model is not deployed in a computer device, which generates the model by API calls corresponding to the generative model. In some embodiments, the generative model is a model that is built and trained by the developer. In some embodiments, the generative model is a published pre-trained model.
The first checking process is used for expressing a process of reasoning whether the content of the sample text is compliant or not. For example, a process for expressing whether the generation model infers (thinks) whether the content of the sample text is compliant may employ natural language descriptions. It should be noted that, the auditing process in the embodiments of the present application may also be referred to as an auditing process description.
Optionally, the computer device generates a third review process of the sample text from the content features of the sample text by generating a model and determines it as the first review process of the sample text. Or after the third checking process of the sample text is generated, the computer equipment performs label consistency check on the sample text, if the check is not passed, the third checking process of the sample text is regenerated, and the judging and processing processes are executed again, so that the first checking process of the sample text is obtained. The label consistency test refers to predicting a first prediction label of the sample text according to the third checking process and the content characteristics of the corresponding sample text, and judging whether the first prediction label is consistent with the checking label of the sample text, wherein the first prediction label is used for predicting whether the content of the sample text is compliant. Optionally, the first predictive label is generated by generating a model.
Optionally, the computer device may input the first guidance information to the generation model when generating the third review process. Optionally, the first guidance information is manually set. When setting the first guidance information, reference may be made to the following: (1) Prompt model role positioning, e.g. "you are a content auditor"; (2) An audit target is described, such as "please audit whether the following is illegal"; (3) The output sample format is provided, for example, "please give judgment reason sentence by sentence". For example, the first guiding information is "you are a content auditor, a section of content to be audited is provided below, please judge whether the content is illegal or not sentence by sentence, and give out judgment reason for you", and the auditing process generated by the generating model is "sentence 1: no illegal because xx; sentence 2, violation because xx).
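Purely for illustration, the following Python sketch shows how such first guidance information and the content of a sample text might be assembled into a prompt for the generative model; the generate callable is a hypothetical placeholder for whatever interface the generative model actually exposes, not an API disclosed by this application.

```python
def build_first_guidance() -> str:
    # The three recommended elements of the first guidance information:
    # role positioning, audit target, and output format.
    return ("You are a content auditor. "
            "A piece of content to be audited is provided below; "
            "please judge sentence by sentence whether it is violating, "
            "and give the reason for each judgment.")

def generate_third_audit_process(generate, sample_text: str) -> str:
    # 'generate' is a hypothetical callable wrapping the generative model
    # (e.g. a large language model behind an API): prompt in, text out.
    prompt = build_first_guidance() + "\n\nContent:\n" + sample_text
    return generate(prompt)
```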
Step 306: and carrying out content sampling for a plurality of times on the first checking process to obtain a plurality of sampling results of the first checking process, and carrying out consistency check on the first checking process and the plurality of sampling results of the first checking process to screen out a second checking process of the sample text.
The sampling result of the first review process is related to the content of the first review process, for example, to reflect the content of the first review process. For example, the sampling result is used to summarize the content of the first checking process, or extract the main content of the first checking process, or express the content of the first checking process in other expression modes, and the specific content of the sampling result is not limited in the embodiment of the present application. Optionally, each of the plurality of sampling results is different. In some embodiments, the computer device performs the multiple content samples of the first review process by generating a model.
The consistency check of the first checking process and the plurality of sampling results of the first checking process can be regarded as checking whether the content expressed by the first checking process is consistent with the content expressed by the plurality of sampling results. If not, the first review process generated is likely to have errors or "illusions". If so, the generated first review process is likely to be aligned and accurate with the input information, so that the first review process can be used as a second review process for subsequent model training in this case.
Optionally, the computer device generates a question related to the content of the first review process according to the first review process and generates the first answer based on the question and the first review process. And then generating a second answer of each sampling result according to each sampling result and the question, and checking the consistency of the first answer and the second answer by checking the consistency ratio of the first answer and the second answer, thereby realizing the consistency check of the first checking process and further screening out the second checking process. Optionally, the generating process is implemented by generating a model. For example, the computer device generates a question q for asking its content according to the first review process, and generates an answer a to the question q based on the first review process. Then generating answers of the questions q based on the sampling results, and executing on each sampling result to obtain answers a corresponding to different sampling results 1 -a j J is the number of sampling results. By checking answer a and answer a 1 -a j To achieve the consistency ofAnd checking the consistency of the first checking process to screen out the second checking process.
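For illustration only, the following sketch outlines this question-and-answer consistency check, assuming the generative model is exposed as a simple generate(prompt) callable; all prompt strings and helper names here are hypothetical.

```python
def consistency_check(generate, first_process: str, samples: list[str],
                      threshold: float) -> bool:
    # Generate a question q about the content of the first audit process.
    q = generate("Please randomly generate a question about the content "
                 "of the following audit process:\n" + first_process)
    # Generate answer a to q based on the first audit process itself.
    a = generate("Answer the question based on the audit process.\n"
                 f"Process:\n{first_process}\nQuestion:\n{q}")
    # Generate an answer to q from each sampling result and count matches.
    agree = 0
    for s in samples:
        a_k = generate("Answer the question based on the sampling result.\n"
                       f"Result:\n{s}\nQuestion:\n{q}")
        if a_k.strip() == a.strip():  # crude stand-in for a semantic match
            agree += 1
    # Keep the process only if its consistency rate reaches the threshold.
    return agree / len(samples) >= threshold
```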
Step 308: and training a content auditing model through a second auditing process of the sample text.
The content auditing model is used for predicting labels of the text to be predicted according to the content characteristics of the text to be predicted, and the labels of the text to be predicted are used for predicting whether the content of the text to be predicted is compliant or not. The text to be predicted is text entered by a user, including manually entered text or text generated by a machine learning model, to which embodiments of the present application are not limited. In some embodiments, the content audit model is a large language model or the content audit model is a discriminant model or a classification model.
Optionally, the computer device trains the content audit model based on the content features of the sample text, the audit label and the second audit process. Optionally, during training, the computer device inputs the content features of the sample text and the second audit process of the sample text into the content audit model to obtain a second prediction label of the sample text, determines an error loss according to the error between the second prediction label and the audit label of the sample text, and trains the content audit model through this error loss.
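A minimal sketch of this training step is given below, assuming a PyTorch classifier whose input is a joint encoding of the content features and the second audit process; the encode helper and the batch layout are hypothetical, not structures defined by this application.

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, encode, optimizer, batch) -> float:
    # 'encode' is a hypothetical function turning (content features,
    # second audit process) pairs into a tensor the model accepts.
    x = encode(batch["content_features"], batch["second_audit_process"])
    y = batch["audit_label"]          # 0 = compliant, 1 = non-compliant
    logits = model(x)                 # scores for the second prediction label
    loss = nn.functional.cross_entropy(logits, y)   # error loss
    optimizer.zero_grad()
    loss.backward()                   # train the model through the error loss
    optimizer.step()
    return loss.item()
```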
Optionally, after the second audit process of the sample text is obtained, the computer device merges it with the first data set to obtain a second data set, and trains the content audit model through the second data set.
In some embodiments, the sample texts and the text to be predicted are texts in the map and internet-of-vehicles field, for example text input by a user in a map client, including text reflecting road traffic conditions, text exchanged with other users, and text asking about or describing a place on the map; or text input by a user in a vehicle-mounted terminal, including text for questions or searches, text exchanged with other users, and comments posted by the user. The embodiments of the present application do not limit the content type of the text.
In summary, in the method provided by this embodiment, by generating a first audit process for the sample text and performing a consistency check on it against the content sampling results of that audit process, aligned and more accurate training data, namely the second audit process, can be generated and screened. If a first audit process and its content sampling results cannot pass the consistency check, the generated first audit process is likely to contain errors or 'hallucinations'. Performing consistency verification on the generated data therefore screens the data used for model training and eliminates the errors and hallucinations produced when the generative model generates data. Training the content audit model with the more accurate second audit process improves the reliability of the content audit model.
FIG. 4 is a flowchart of a training method of a content audit model according to an exemplary embodiment of the present application. The method may be performed by a computer device or by a client on a computer device. As shown in FIG. 4, the method includes:
step 402: content characteristics of a plurality of sample texts and an audit tag of each sample text are obtained.
The content characteristics of the sample text are used to reflect the content of the sample text. Optionally, the content feature of the sample text refers to text data of the sample text. The audit tag is used to reflect whether the content of the sample text is compliant. Illustratively, whether the content is compliant includes whether it meets the requirements of national or local laws, regulations, whether it meets the requirements set by the platform for the content entered by the user, and so forth.
Step 404: and generating a first check process of the sample text according to the content characteristics of the sample text through a generation model.
The generation model is used to generate output information that matches the input information. In some embodiments, the generative model comprises a large language model. The first checking process is used for expressing the process of reasoning whether the content of the sample text is compliant or not, and natural language description can be adopted.
Optionally, the computer device inputs the content features of the sample text and the first guide information into the generation model, so as to obtain a third checking process of the sample text, wherein the third checking process is used for expressing a process of reasoning whether the content of the sample text is compliant. The first guide information is used for guiding the generation model to generate a third checking process of the sample text according to the content characteristics of the sample text. Optionally, the first guidance information is manually set. The computer device may then determine a first review process of the sample text based on the third review process of the sample text.
In the process of generating the third audit process through the generative model, the model predicts, at the current time step, the probability of the prediction result at the next time step, and then searches over the per-step prediction results based on these probabilities via beam search, thereby assembling the finally generated third audit process. Illustratively, at time step t, the probability with which the generative model G_θ predicts the next result may be expressed as P(a_{i,t} | a_{i,1:t-1}, c_i) = G_θ(a_{i,1:t-1}, c_i), where P denotes probability, c_i denotes the content features of the i-th sample text, a_{i,t} denotes the prediction result at time step t, and a_i denotes the third audit process of the i-th sample text. Beam search is a search algorithm for text generation that aims to find the output sequence with the highest probability among the possible output sequences; using beam search, the finally generated sequence, i.e., the third audit process, can be obtained. At each time step t, beam search uses the conditional probabilities to compute the cumulative probability of every candidate sequence up to that step, retains the K sequences with the highest probability as candidates, and iterates this process in subsequent time steps until the output sequence with the highest probability is obtained as the finally generated sequence. This search strategy ensures that a high-probability and semantically reasonable generated sequence is obtained.
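As an illustration of the beam search described above, the following Python sketch searches over an abstract next-step distribution; step_probs is a hypothetical callable returning P(a_{i,t} | a_{i,1:t-1}, c_i) for every candidate token and is not part of this application.

```python
import math

def beam_search(step_probs, c_i, eos, k: int, max_steps: int) -> list:
    # Each beam entry is (cumulative log probability, sequence so far).
    beams = [(0.0, [])]
    for _ in range(max_steps):
        candidates = []
        for logp, seq in beams:
            if seq and seq[-1] == eos:          # finished sequence
                candidates.append((logp, seq))
                continue
            # step_probs returns {token: P(token | seq, c_i)}.
            for tok, p in step_probs(seq, c_i).items():
                candidates.append((logp + math.log(p), seq + [tok]))
        # Retain the K candidate sequences with the highest probability.
        beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:k]
    return max(beams, key=lambda b: b[0])[1]    # most probable sequence
```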
The computer device may determine the first audit process of the sample text by performing a label consistency check on the generated third audit process. Optionally, the computer device inputs the content features of the sample text, the third audit process of the sample text and second guidance information into the generative model, thereby obtaining a first prediction label of the sample text. The first prediction label is used to predict whether the content of the sample text is compliant, and the second guidance information is used to guide the generative model to generate the first prediction label of the sample text according to the content features of the sample text and the third audit process of the sample text. In the case that the first prediction label of the sample text is consistent with the audit label, the computer device may determine the third audit process of the sample text as the first audit process of the sample text for use in subsequent steps.
Optionally, in the case that the first prediction label and the audit label of the sample text are inconsistent, the computer device may acquire third guidance information, input the content features of the sample text and the third guidance information into the generative model to regenerate the third audit process of the sample text, and then determine the first audit process of the sample text according to the regenerated third audit process. Optionally, while regenerating the third audit process, the computer device may additionally input the audit label of the sample text into the generative model for its reference. The computer device may regenerate all third audit processes, or only those that fail the label consistency check. Optionally, the computer device may terminate the regeneration once all first prediction labels are consistent with the audit labels or the number of regenerations reaches a preset threshold, and determine the currently generated third audit processes as the first audit processes for use in subsequent steps. Alternatively, the computer device may count the degree of label consistency achieved by the audit processes generated with each piece of guidance information, and generate audit processes with the best-performing guidance information.
Optionally, the third guidance information is obtained by modifying the first guidance information; the modification is performed manually or automatically. For the modification of the first guidance information, the following rules may be referenced, for example: (1) adjust the wording of the content, using clearer and more neutral language; (2) explicitly emphasize that the audit needs to conform to the ground-truth label; (3) add a hint about the currently error-prone content, or provide a positive example; (4) appropriately shorten or lengthen the guidance information. Alternatively, by inputting the first guidance information and the modification rules into the generative model, third guidance information, predicted by the generative model by modifying the first guidance information according to the modification rules, can be obtained.
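Purely as an illustration, the sketch below combines the label consistency check with guidance revision; generate, predict_label and revise_guidance are hypothetical callables wrapping the generative model, not interfaces defined by this application.

```python
def obtain_first_audit_process(generate, predict_label, revise_guidance,
                               features: str, audit_label: int,
                               guidance: str, max_retries: int) -> str:
    process = generate(guidance + "\n" + features)   # third audit process
    for _ in range(max_retries):
        # First prediction label from features + generated audit process.
        if predict_label(features, process) == audit_label:
            return process            # passes the label consistency check
        # Modify the guidance (first -> third guidance information) and
        # regenerate; the audit label may be supplied as a reference.
        guidance = revise_guidance(guidance)
        process = generate(guidance + "\n" + features +
                           f"\nReference audit label: {audit_label}")
    return process    # fall back to the latest generation after max retries
```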
Step 406: and generating a question corresponding to the first checking process according to the first checking process through the generating model, and generating a first answer according to the first checking process and the question through the generating model.
The questions corresponding to the first review process are related to the content of the first review process. Optionally, the computer device inputs the first checking process and the corresponding guiding information into the generating model, so as to obtain the problem corresponding to the first checking process. The guiding information is used for guiding the generating model to generate a problem corresponding to the first checking process according to the first checking process, for example, the guiding generating model generates a problem of inquiring the content of the first checking process. For example, the guidance information is "please randomly generate a problem for the content of the input auditing process".
Optionally, the computer device inputs the first audit process, the question corresponding to the first audit process, and corresponding guidance information into the generation model to obtain the first answer. The guidance information guides the generation model to generate, from the first audit process, an answer to the question corresponding to the first audit process. For example, the guidance information is "please generate an answer to the corresponding question based on the content of the input audit process".
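The question and first-answer generation of this step can be sketched as below, reusing the assumed `generate` callable; the prompt wording paraphrases the guidance examples above.

```python
from typing import Callable, Tuple

def make_question_and_answer(
    generate: Callable[[str], str], audit_process: str
) -> Tuple[str, str]:
    question = generate(
        "Please randomly generate a question about the content of the "
        f"input audit process.\nAudit process: {audit_process}"
    )
    first_answer = generate(
        "Please generate an answer to the question based on the content "
        f"of the input audit process.\nAudit process: {audit_process}\n"
        f"Question: {question}"
    )
    return question, first_answer
```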
Step 408: sampling the content of the first audit process multiple times through the generation model to obtain multiple sampling results of the first audit process.
The sampling results of the first audit process relate to, and reflect, the content of the first audit process. For example, a sampling result summarizes the content of the first audit process, extracts its main content, or expresses its content in another way; the specific content of the sampling results is not limited in the embodiments of the present application. Optionally, the multiple sampling results differ from one another. Optionally, the computer device inputs the first audit process and per-sample guidance information into the generation model to obtain the multiple sampling results of the first audit process. For example, the guidance information includes "please generate a summary of the input audit process" and "please express the content of the input audit process in another way", or "please express the content of the input audit process in 2 different ways".
For example, the content sampling process can be divided into text processing, which performs text segmentation and vectorization, and model prediction. The input text is split into several segments according to a given rule, and each segment is converted into a vector. Model prediction then generates a prediction result from the vectorized text. Based on the prediction result, some segments may, for example, be selected from the input text and combined into new text, i.e., a sampling result.
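A sketch of one way to realize this step with prompt-based sampling follows, where each sampling pass asks for a different restatement of the audit process; the guidance strings and the default of three samples are illustrative assumptions.

```python
from typing import Callable, List

SAMPLING_GUIDANCES = [
    "Please generate a summary of the input audit process.",
    "Please express the content of the input audit process in another way.",
    "Please extract the main content of the input audit process.",
]

def sample_contents(
    generate: Callable[[str], str], audit_process: str, q: int = 3
) -> List[str]:
    # Each call uses a different guidance so the results differ from one another
    return [
        generate(f"{SAMPLING_GUIDANCES[i % len(SAMPLING_GUIDANCES)]}\n"
                 f"Audit process: {audit_process}")
        for i in range(q)
    ]
```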
Step 410: generating, by the generation model, a second answer for each sampling result from that sampling result of the first audit process and the question.
The computer device inputs a sampling result of the first audit process, the question corresponding to the first audit process, and corresponding guidance information into the generation model to obtain the second answer for that sampling result. Performing this step for each sampling result yields a second answer for each. The guidance information guides the generation model to generate, from the sampling result, an answer to the question corresponding to the first audit process. For example, the guidance information is "please generate an answer to the input question based on the content of the input sampling result".
Step 412: performing a consistency check on the first answer of the first audit process and the second answer of each sampling result to screen out the second audit process of the sample text.
The computer device determines the consistency rate between the first answer of each first audit process and the second answers of its sampling results, removes from all first audit processes those whose consistency rate is below a consistency rate threshold, and determines the remaining first audit processes as second audit processes. Optionally, the consistency rate is the ratio of a first number to a second number, where the first number is the number of sampling results whose second answer is consistent with the first answer, and the second number is the total number of sampling results corresponding to the first audit process.
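The screening by consistency rate can be sketched as below; exact string match between answers stands in for the consistency judgment, which in practice could itself be delegated to the generation model, and the threshold value is an assumption.

```python
from typing import Iterable, List, Tuple

def consistency_rate(first_answer: str, second_answers: List[str]) -> float:
    # first number: sampling results whose second answer matches the first answer
    first_number = sum(
        1 for a in second_answers if a.strip() == first_answer.strip()
    )
    second_number = len(second_answers)  # total number of sampling results
    return first_number / second_number

def screen_second_audit_processes(
    candidates: Iterable[Tuple[str, str, List[str]]],  # (process, first, seconds)
    threshold: float = 0.6,                            # assumed threshold value
) -> List[str]:
    return [
        process for process, first, seconds in candidates
        if consistency_rate(first, seconds) >= threshold
    ]
```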
The consistency check between the first audit process and its multiple sampling results can be regarded as checking whether the content expressed by the first audit process is consistent with the content expressed by those sampling results. If not, the generated first audit process is likely to contain errors or "hallucinations". If so, the generated first audit process is likely to be accurate and aligned with the input information, and can therefore serve as a second audit process for subsequent model training.
Step 414: training the content audit model through the second audit process of the sample text.
The content audit model predicts the label of a text to be predicted from its content features; this label predicts whether the content of the text to be predicted is compliant. The text to be predicted is user-input text, which may be manually entered or generated by a machine learning model; the embodiments of the present application are not limited in this respect. In some embodiments, the content audit model is a large language model, a discriminant model, or a classification model.
Optionally, the computer device trains the content audit model based on the content features of the sample text, the audit tag, and the second audit process. Optionally, during training, the computer device inputs the content features of the sample text and the second audit process of the sample text into the content audit model to obtain a second prediction tag of the sample text, which predicts whether the content of the sample text is compliant. An error loss is determined from the error between the second prediction tag of the sample text and the audit tag of the sample text, and the content audit model is trained through this error loss. Optionally, the computer device determines a cross-entropy loss from the second prediction tag of the sample text and the audit tag of the sample text, and trains the content audit model through this cross-entropy loss.
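A minimal PyTorch-style sketch of the cross-entropy training described above follows; `audit_model` is assumed to map a batch of (content, audit process) inputs to two-class logits, and the data loader, learning rate, and optimizer choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_content_audit_model(audit_model: nn.Module, train_loader, epochs: int = 1):
    criterion = nn.CrossEntropyLoss()                       # error loss
    optimizer = torch.optim.AdamW(audit_model.parameters(), lr=2e-5)
    for _ in range(epochs):
        for content, process, tags in train_loader:
            logits = audit_model(content, process)          # second prediction tag
            loss = criterion(logits, tags)                  # error vs. the audit tag
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```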
In summary, by generating the first audit process of the sample text and performing a consistency check on it against its content sampling results, the method provided by this embodiment can generate and screen aligned, more accurate training data, namely the second audit process. If a first audit process and its content sampling results cannot pass the consistency check, the generated first audit process is likely to contain errors or "hallucinations". Verifying the consistency of the generated data therefore screens the data used for model training and eliminates the errors and "hallucinations" produced when the generative model generates data. Training the content audit model with the more accurate second audit process improves the reliability of the content audit model.
The method provided by this embodiment further generates the first answer to the question from the first audit process, and a second answer from each sampling result of the first audit process and the same question, so that the consistency check between the first audit process and its multiple sampling results can be performed by checking the consistency of the second answers with the first answer, providing a convenient and accurate way to realize the consistency check.
In the method provided by this embodiment, the first audit processes to be removed for low consistency are determined by calculating the consistency rate between the second answers and the first answer, so that generated data that may contain errors and "hallucinations" is accurately identified, and such errors and "hallucinations" in the model-generated data are accurately eliminated.
In the method provided by this embodiment, the third audit process of the sample text is generated first, and the first audit process used for training the model is then determined from it, which improves the accuracy of determining the first audit process.
Performing the tag consistency check on the generated third audit process also helps align the generated third audit process with the sample text, improving the accuracy of the generated third audit process.
When a generated third audit process fails the tag consistency check, the method provided by this embodiment regenerates it with modified guidance information, improving the accuracy of the generated third audit process.
In the method provided by this embodiment, inputting the corresponding audit tag into the model while regenerating the third audit process helps the model generate a more accurate third audit process.
The method provided by this embodiment also constructs the model's error loss from the content features of the sample text, the audit tag, and the second audit process, and trains the content audit model with this error loss, providing an efficient way to train the model.
According to the method provided by the embodiments of the present application, consistency checks can be performed on content samplings of the generated output (the audit process) without additional manual annotation resources, so that errors and "hallucinations" in the language model are detected and eliminated, making it possible to build a more reliable and efficient content audit model. The method can be applied to generating aligned training data and then fine-tuning a content audit model with that data; combining this hallucination-detection mechanism with training-data generation yields a content audit model with strong adaptability, stability, and reliability. The method mainly comprises the following three parts:

Corpus generation part: automated corpus generation (generating audit processes) for partially unlabeled data using a general pre-trained language model.

Corpus calibration module: label calibration and consistency calibration (eliminating model hallucinations) of the generated corpus against the truly annotated corpus, forming the training data set.

Model construction module: training and optimizing the content audit model using the consistency-calibrated training set data.

The implementation process comprises: collecting real corpus data (sample texts) and performing expert annotation to generate an initial training set; generating audit processes for the sample texts using a general language model; extracting the generated audit processes and aligning them with the true labels of the sample texts through self-reflection; further detecting and removing hallucinations in the generated audit processes (e.g., audit processes that are inconsistent with the true label or that fail the question-and-answer (QA) consistency check) using a sampling consistency method; training the content audit model on the calibrated and optimized data set; and tuning and deploying the content audit model to realize automated content auditing.
The method provided by the embodiment of the application mainly comprises the following steps:
Step 1: generating real expert-annotated data.
Real corpus data (sample texts) are collected and expert-annotated to generate an initial training set. Let the real corpus samples be $X = \{x_1, x_2, \ldots, x_N\}$, $N$ samples in total, where $N$ is a positive integer. Each sample $x_i$ consists of a content feature $c_i$ and an audit label $y_i$, i.e., $x_i = (c_i, y_i)$, where the content feature $c_i$ is text data and the audit label $y_i \in \{0, 1\}$ denotes the sample class: 0 represents normal content and 1 represents violating content.
Through manual annotation, an initial training set can be formed: $D_{\text{init}} = \{(c_i, y_i)\}_{i=1}^{N}$. This training set contains only the manually annotated samples and labels, and is used for verifying the subsequently generated corpus and for model training.
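In code form, the initial training set is simply a list of (content, label) pairs; the small helper below is an illustrative sketch with hypothetical names.

```python
from typing import List, Tuple

Sample = Tuple[str, int]  # (content feature c_i, audit label y_i)

def build_initial_training_set(
    contents: List[str], labels: List[int]
) -> List[Sample]:
    # y_i must be 0 (normal content) or 1 (violating content)
    assert len(contents) == len(labels)
    assert all(y in (0, 1) for y in labels)
    return list(zip(contents, labels))
```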
Step 2: generating the audit process.
Audit processes of the sample texts are generated using a general generative model (language model $G$). Using a pre-trained language model $G_\theta$, an audit process is generated for each content $c_i$; the generation task is cast as a sequence-to-sequence task in which the source sequence is the content $c_i$ and the target sequence is the audit process $a_i$. During generation, at time step $t$, the model $G_\theta$ predicts the probability of the next token, $P(a_{i,t} \mid a_{i,1:t-1}, c_i) = G_\theta(a_{i,1:t-1}, c_i)$, where $P$ denotes probability and $a_{i,t}$ denotes the prediction at time step $t$. The final generated sequence is then obtained by beam search, $a_i = \mathrm{beam\_search}(G_\theta, c_i)$. Repeating this generation process yields all generated samples (audit processes) $a_1, a_2, \ldots, a_M$, where $M$ is the number of generated samples.
The model $G_\theta$ is pre-trained on a large amount of real data and learns to generate semantically reasonable text, but individual generated samples may contain hallucinations. Subsequent steps detect and remove these hallucinations to obtain more reliable training data.
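For illustration, the following sketch implements the per-step factorization and a small beam search; `step_probs` stands in for $G_\theta$'s next-token distribution and is an assumed interface, and the beam width, length cap, and end-of-sequence token are illustrative.

```python
import math
from typing import Callable, Dict, List

def beam_search(
    step_probs: Callable[[List[str], str], Dict[str, float]],  # P(a_t | a_<t, c)
    content: str,
    beam_width: int = 3,
    max_len: int = 50,
    eos: str = "</s>",
) -> List[str]:
    beams = [([], 0.0)]  # (token sequence, accumulated log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:       # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for token, p in step_probs(seq, content).items():
                candidates.append((seq + [token], score + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
        if all(seq and seq[-1] == eos for seq, _ in beams):
            break
    return beams[0][0]  # highest-scoring audit process token sequence
```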
Step 3: label consistency check of the audit process.
The model $G_\theta$ is further used to generate an audit label $\hat{y}_i$ based on the content $c_i$ and the audit process $a_i$. Through model self-reflection, the generated label $\hat{y}_i$ is aligned with the real label $y_i$: specifically, when the two are not aligned, the real label information is fed back to the model $G_\theta$ and the guidance information (prompt) for generating the audit process is redesigned, so that the audit process is regenerated as $\tilde{a}_i$. Through this self-reflective calibration, high-quality, aligned audit processes can be obtained.
Step 4: eliminating hallucinations in the audit process.
Even after this self-reflection, the generated audit processes may still contain hallucinations, so a sampling consistency check is further applied to identify hallucinated audit processes and filter them out. Specifically, given the set of calibrated audit processes $\{\tilde{a}_i\}$, the following steps are performed:
(1) Based on the audit process $\tilde{a}_i$, generate a question $q$ related to its content, and based on the audit process $\tilde{a}_i$ and the question $q$, generate a corresponding answer $a$.
(2) For the audit process $\tilde{a}_i$, repeat the content sampling process $Q$ times to obtain $s_i^{(1)}, s_i^{(2)}, \ldots, s_i^{(Q)}$, where $s_i^{(q)}$ denotes the $q$-th sampling result of the $i$-th audit process.
(3) For each sampling result $s_i^{(q)}$, answer the same question $q$ to generate an answer $a^{(q)}$.
(4) Compute the answer consistency rate by the following formula: $C_i = \frac{1}{Q} \sum_{q=1}^{Q} \mathbb{1}\left[a^{(q)} = a\right]$, i.e., the fraction of sampled answers $a^{(q)}$ that are consistent with the original answer $a$.
(5) If the answer consistency rate $C_i$ is below a threshold $\tau$, the corresponding audit process is judged to contain a potential hallucination.
(6) Audit processes containing hallucinations are removed from the training set.
Through question-answer consistency sampling, audit processes containing hallucinations can be distinguished from semantically consistent audit processes, so any remaining potential hallucinations are effectively detected and removed, ensuring the quality of the training set.
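Composing the helper sketches from the embodiment above (`make_question_and_answer`, `sample_contents`, `consistency_rate`), steps (1) through (6) can be drafted end to end as follows; $Q$, $\tau$, and the prompt strings remain illustrative assumptions.

```python
from typing import Callable, List

def remove_hallucinated_processes(
    generate: Callable[[str], str],
    audit_processes: List[str],
    q: int = 5,          # number of samplings Q (assumed)
    tau: float = 0.6,    # consistency threshold tau (assumed)
) -> List[str]:
    kept = []
    for process in audit_processes:
        question, answer = make_question_and_answer(generate, process)  # (1)
        samples = sample_contents(generate, process, q)                 # (2)
        second_answers = [                                              # (3)
            generate("Please generate an answer to the input question "
                     f"based on the sampling result.\nSampling result: {s}\n"
                     f"Question: {question}")
            for s in samples
        ]
        c = consistency_rate(answer, second_answers)                    # (4)
        if c >= tau:                                                    # (5), (6)
            kept.append(process)
    return kept
```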
Step 5: training and deploying the content audit model.
The content audit model is trained on the calibrated data set. With the corpus generation, calibration, and quality optimization of the previous steps, a new training set is constructed: $D = \{(c_i, \tilde{a}_i, y_i)\}_{i=1}^{M+N}$, where the first $M$ samples correspond to the calibrated and optimized generated audit processes, the last $N$ samples correspond to the manually annotated real samples, and $\tilde{a}_i$ denotes the audit process.
The audit model is defined as a discriminant model $f_\phi$ whose inputs are the content feature $c$ and the audit process $\tilde{a}$ and whose output is a violation-class prediction, expressed as follows: $\hat{y} = f_\phi(c, \tilde{a})$.
The model is learned by minimizing a training loss function, expressed as follows: $\mathcal{L}(\phi) = \frac{1}{M+N} \sum_{i=1}^{M+N} \ell\left(f_\phi(c_i, \tilde{a}_i), y_i\right)$, where $\ell(\cdot)$ is the cross-entropy loss function. By training on the semantically reasonable, high-quality calibrated data set $D$, an accurate content audit model $f_\phi$ is learned, completing model construction.
Illustratively, FIG. 5 is a schematic diagram of a machine learning model provided in an exemplary embodiment of the present application. As shown in fig. 5, the machine learning model 501 includes an encoding network 502 and a decoding network 503. Inputting the input information into the encoding network 502 yields the encoding network's feature extraction result for that input, i.e., the encoded information. Inputting the encoded information into the decoding network 503 yields the output of the decoding network 503, i.e., prediction information matching the input information. The encoding network 502 and the decoding network 503 each have an N-layer structure: the encoding network 502 is a cascade of N encoders and the decoding network 503 is a cascade of N decoders. Every layer of the encoding network 502 has the same structure, every layer of the decoding network 503 has the same structure, and the layer structure of the encoding network 502 is similar to that of the decoding network 503.
Illustratively, fig. 6 is a schematic structural diagram of an encoder provided in an exemplary embodiment of the present application. As shown in fig. 6, each layer of the encoding network 502 (encoder 504) typically includes a multi-head self-attention module (Multi-Head Self-Attention Module; the "self-attention" in the encoder 504 structure of fig. 6) and a feed-forward fully connected network (also known as a feed-forward network (Feed Forward Network, FFN); the "feed-forward full connection" in the encoder 504 structure of fig. 6).
Illustratively, fig. 7 is a schematic diagram of a decoder provided in an exemplary embodiment of the present application. As shown in fig. 7, each layer of the decoding network 503 (decoder 505) typically includes a masked multi-head self-attention module (Mask Multi-Head Self-Attention Module; the lower "self-attention" in the decoder 505 structure of fig. 7), a cross encoder-decoder self-attention module (also called a cross self-attention module, which can be regarded as a multi-head self-attention module; the middle "self-attention" in the decoder 505 structure of fig. 7), and a feed-forward fully connected module (the upper "feed-forward full connection" in the decoder 505 structure of fig. 7).
The multi-head self-attention module of the encoding network 502 obtains the weight relationship of each word in the input text relative to the other words in the input text. The feed-forward fully connected module of the encoding network 502 performs a nonlinear transformation on the input features. The masked multi-head self-attention module of the decoding network 503 functions like the multi-head self-attention module of the encoding network 502, except that it also prevents the decoding network 503, when generating the prediction for a given word of the input text, from accessing the predictions for the words that follow it (during training, the prediction corresponding to the input text is fed in at the lower position of the decoder structure). The cross self-attention module of the decoding network 503 functions like the multi-head self-attention module of the encoding network 502, except that its input consists of the output of the previous module in the decoding network 503 and the output of the last layer of the encoding network 502. The feed-forward fully connected module of the decoding network 503 functions like that of the encoding network 502.
In addition, with continued reference to figs. 5, 6, and 7, each of the above modules (multi-head self-attention modules and feed-forward fully connected modules) in the encoding network 502 and decoding network 503 of the machine learning model 501 is equipped with a residual connection and a layer normalization (LayerNorm) layer (i.e., the residual & normalization (Add & Norm) blocks in figs. 6 and 7). A residual connection is a structure in which the output of one module of the model is also fed as input to a non-adjacent module; it reduces the effective complexity of the model and mitigates vanishing gradients. The layer normalization layer normalizes the input information. Together, the residual connection and layer normalization stabilize model training. For the training process of the machine learning model 501, the computer device can acquire a large amount of text data to pre-train the machine learning model 501, so that it generalizes well to texts in different fields.
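As a concrete illustration of one encoder layer in fig. 6, the PyTorch sketch below wires multi-head self-attention and a feed-forward fully connected network through residual connections and layer normalization; all dimensions are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(                  # feed-forward full connection
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.self_attn(x, x, x)      # each token weighted vs. the others
        x = self.norm1(x + attn_out)               # residual & normalization
        x = self.norm2(x + self.ffn(x))            # residual & normalization
        return x
```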
Generative models generally fall into three types: autoregressive models, self-encoding models, and sequence-to-sequence models.
An autoregressive model (Autoregressive Model) is pre-trained with the classical language modeling task: given the preceding text, predict what follows. Its structure corresponds to the decoding network portion of the machine learning model in fig. 5. Since the decoding network can only see the preceding context and not the following one, such models are generally used for text generation tasks.
A self-encoding model (Auto Encoder Model) is pre-trained with a sentence reconstruction task: sentences are corrupted in some way, perhaps by masking or reordering, and the model is expected to restore the corrupted parts. Its structure corresponds to the encoding network portion of the machine learning model in fig. 5. Unlike an autoregressive model, it can see both the preceding and following context; owing to this property, self-encoding models are often used for natural language understanding tasks such as text classification and reading comprehension.
A sequence-to-sequence model (Sequence to Sequence Model) uses both the encoding network portion and the decoding network portion of the machine learning model in fig. 5. Its most natural applications are text summarization, machine translation, and the like; in fact, virtually all NLP tasks can be cast as sequence-to-sequence problems.
Optionally, the generating model in the embodiment of the present application includes at least one of the above-mentioned autoregressive model, self-encoding model, and sequence-to-sequence model.
The method provided by the embodiments of the present application is applied, by way of example, to training an information flow audit model, i.e., a content audit model for auditing the content of social information streams. Fig. 8 is a flowchart of a training method of an information flow audit model according to an exemplary embodiment of the present application. The method may be used with a computer device or a client on a computer device. As shown in fig. 8, the method includes:
step 802: text data of a plurality of sample social information streams and audit labels of each sample social information stream are obtained.
A social information stream includes social content displayed in an information stream interface, which is a user interface in a client that supports posting social content; the interface presents social content posted by different users in chronological order, such as content describing a user's current mood, current activities, or current thoughts. The audit tag reflects whether the content of the sample social information stream is compliant. Optionally, the audit tag is manually generated from the text data of the sample social information stream.
Step 804: generating, by a generation model, a first audit process of the sample social information stream from the text data of the sample social information stream.
The generation model generates output information matching the input information. In some embodiments, the generation model comprises a large language model. The first audit process expresses a process of reasoning about whether the content of the sample social information stream is compliant, and may be described in natural language.
Optionally, the computer device inputs the text data of the sample social information stream and first guidance information into the generation model to obtain a third audit process of the sample social information stream, which expresses a process of reasoning about whether the content of the sample social information stream is compliant. The first guidance information guides the generation model to generate the third audit process from the text data of the sample social information stream. Optionally, the first guidance information is manually set. The computer device may then determine the first audit process of the sample social information stream from the third audit process.
The computer device may determine the first audit process of the sample social information stream by performing a tag consistency check on the generated third audit process. Optionally, the computer device inputs the text data of the sample social information stream, the third audit process, and second guidance information into the generation model to obtain a first prediction tag of the sample social information stream. The first prediction tag predicts whether the content of the sample social information stream is compliant, and the second guidance information guides the generation model to generate the first prediction tag from the text data and the third audit process. If the first prediction tag is consistent with the audit tag, the computer device may determine the third audit process as the first audit process of the sample social information stream for use in subsequent steps.
Optionally, if the first prediction tag of the sample social information stream is inconsistent with the audit tag, the computer device may acquire third guidance information, input the text data of the sample social information stream and the third guidance information into the generation model to regenerate the third audit process, and then determine the first audit process from the regenerated third audit process. Optionally, when regenerating the third audit process, the computer device may additionally input the audit tag of the sample social information stream into the generation model for the model's reference. The computer device may regenerate all third audit processes, or only those that fail the tag consistency check. Optionally, the computer device may terminate regeneration once all first prediction tags are consistent with the audit tags, or once the number of regenerations reaches a preset threshold, and determine the currently generated third audit process as the first audit process for use in subsequent steps. Alternatively, the computer device may track the degree of tag consistency achieved with each piece of guidance information and generate audit processes using the best-performing guidance information. Optionally, the third guidance information is obtained by modifying the first guidance information, either manually or automatically.
Step 806: generating, by the generation model, a question corresponding to the first audit process from the first audit process, and generating, by the generation model, a first answer from the first audit process and the question.
The question corresponding to the first audit process relates to its content. Optionally, the computer device inputs the first audit process and corresponding guidance information into the generation model to obtain the question. The guidance information guides the generation model to generate, from the first audit process, a question about it, for example a question asking about the content of the first audit process.
Optionally, the computer device inputs the first audit process, its corresponding question, and corresponding guidance information into the generation model to obtain the first answer. The guidance information guides the generation model to generate, from the first audit process, an answer to the question.
Step 808: sampling the content of the first audit process multiple times through the generation model to obtain multiple sampling results of the first audit process.
The sampling results of the first audit process relate to its content. Optionally, the multiple sampling results differ from one another. Optionally, the computer device inputs the first audit process and per-sample guidance information into the generation model to obtain the multiple sampling results. For example, the guidance information includes "please generate a summary of the input audit process" and "please express the content of the input audit process in another way", or "please express the content of the input audit process in 2 different ways".
Step 810: generating, by the generation model, a second answer for each sampling result from that sampling result of the first audit process and the question.
The computer device inputs a sampling result of the first audit process, the corresponding question, and corresponding guidance information into the generation model to obtain the second answer for that sampling result. Performing this step for each sampling result yields a second answer for each. The guidance information guides the generation model to generate, from the sampling result, an answer to the question.
Step 812: performing a consistency check on the first answer of the first audit process and the second answer of each sampling result to screen out the second audit process of the sample social information stream.
The computer device determines the consistency rate between the first answer of each first audit process and the second answers of its sampling results, removes from all first audit processes those whose consistency rate is below a consistency rate threshold, and determines the remaining first audit processes as second audit processes. Optionally, the consistency rate is the ratio of a first number to a second number, where the first number is the number of sampling results whose second answer is consistent with the first answer, and the second number is the total number of sampling results corresponding to the first audit process.
The consistency check between the first audit process and its multiple sampling results can be regarded as checking whether the content expressed by the first audit process is consistent with the content expressed by those sampling results. If not, the generated first audit process is likely to contain errors or "hallucinations". If so, the generated first audit process is likely to be accurate and aligned with the input information, and can therefore serve as a second audit process for subsequent model training.
Step 814: training the information flow audit model through the second audit process of the sample social information stream.
The information flow audit model is a content audit model for auditing social information stream content. It predicts the label of a social information stream to be predicted from its text data; this label predicts whether the stream's content is compliant. Optionally, the social information stream to be predicted includes social content that a user intends to post in the information stream interface; when the user chooses to post the content, auditing by the information flow audit model is triggered. In some embodiments, the information flow audit model is deployed in a background server supporting the client in which social content is posted. In some embodiments, the information flow audit model is a large language model, a discriminant model, or a classification model.
Optionally, the computer device trains the information flow audit model from the text data of the sample social information stream, the audit tag, and the second audit process. Optionally, during training, the computer device inputs the text data of the sample social information stream and the second audit process into the information flow audit model to obtain a second prediction tag of the sample social information stream, which predicts whether the stream's content is compliant. An error loss is determined from the error between the second prediction tag and the audit tag, and the information flow audit model is trained through this error loss. Optionally, the computer device determines a cross-entropy loss from the second prediction tag and the audit tag, and trains the information flow audit model through this cross-entropy loss.
In summary, by generating the first audit process of the sample social information stream and performing a consistency check on it against its content sampling results, the method provided by this embodiment can generate and screen aligned, more accurate training data, namely the second audit process. If a first audit process and its content sampling results cannot pass the consistency check, the generated first audit process is likely to contain errors or "hallucinations". Verifying the consistency of the generated data therefore screens the data used for model training and eliminates the errors and "hallucinations" produced when the generative model generates data. Training the information flow audit model with the more accurate second audit process improves its reliability and thus the accuracy of content auditing for social information streams.
It should be noted that, before and during the collection of a user's relevant data (for example, the sample texts and texts to be predicted in the present application), a prompt interface, a pop-up window, or output voice prompt information may be presented to inform the user that their relevant data is currently being collected. The relevant steps of obtaining the user's data are executed only after a confirmation operation on the prompt interface or pop-up window is obtained from the user; otherwise (i.e., when no confirmation operation is obtained), those steps end and no user data is obtained. In other words, all user data collected in the present application is collected with the user's consent and authorization, and the collection, use, and processing of relevant user data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It should be noted that the order of the steps of the method provided in the embodiments of the present application may be adjusted as appropriate, and steps may be added or removed as circumstances require; any variation readily conceivable to those skilled in the art within the technical scope of the present application falls within the protection scope of the present application and is not further described.
Fig. 9 is a schematic structural diagram of a training device of a content audit model according to an exemplary embodiment of the present application. As shown in fig. 9, the apparatus includes:
the obtaining module 901 is configured to obtain content features of a plurality of sample texts and an audit tag of each sample text, where the audit tag is configured to reflect whether the content of the sample text is compliant;
a generating module 902, configured to generate, through a generating model, a first checking process of the sample text according to the content features of the sample text, where the first checking process is used to express a process of reasoning about whether the content of the sample text is compliant;
the verification module 903 is configured to sample the content of the first checking process multiple times to obtain multiple sampling results of the first checking process, and to perform a consistency check on the first checking process and the multiple sampling results of the first checking process to screen out the second checking process of the sample text;
the training module 904 is configured to train the content audit model through the second checking process of the sample text, where the content audit model is configured to predict the label of a text to be predicted according to the content features of the text to be predicted, and the label of the text to be predicted is used to predict whether the content of the text to be predicted is compliant.
In an optional design, the generating module 902 is configured to generate, through the generating model, a question corresponding to the first checking process according to the first checking process, where the question is related to the content of the first checking process; generate a first answer according to the first checking process and the question through the generating model; perform multiple content samplings on the first checking process through the generating model to obtain multiple sampling results of the first checking process; and generate a second answer for each sampling result according to each sampling result of the first checking process and the question through the generating model;
the verification module 903 is configured to perform a consistency check on the first answer of the first checking process and the second answer of each sampling result, so as to screen out the second checking process of the sample text.
In an alternative design, the verification module 903 is configured to determine a consistency rate of the first answer of the first checking process and the second answer of each sampling result; and to remove the first checking processes whose consistency rate is lower than a consistency rate threshold from all the first checking processes, and determine the remaining first checking processes as the second checking processes.
In an optional design, the generating module 902 is configured to input the content features of the sample text and first guide information into the generating model to obtain a third checking process of the sample text, where the first guide information is used to guide the generating model to generate the third checking process of the sample text according to the content features of the sample text;
as shown in fig. 10, the apparatus further includes a determining module 905, where the determining module 905 is configured to determine the first checking process of the sample text according to the third checking process of the sample text.
In an optional design, the generating module 902 is configured to input the content features of the sample text, the third checking process of the sample text, and second guide information into the generating model to obtain a first prediction tag of the sample text, where the first prediction tag is used to predict whether the content of the sample text is compliant, and the second guide information is used to guide the generating model to generate the first prediction tag of the sample text according to the content features of the sample text and the third checking process of the sample text;
the determining module 905 is configured to determine, when the first prediction tag is consistent with the audit tag, the third checking process of the sample text as the first checking process of the sample text.
In an optional design, the obtaining module 901 is configured to obtain third guide information when the first prediction tag is inconsistent with the audit tag, where the third guide information is obtained by modifying the first guide information;
the generating module 902 is configured to input the content features of the sample text and the third guide information into the generating model to regenerate the third checking process of the sample text;
the determining module 905 is configured to determine the first checking process of the sample text according to the regenerated third checking process of the sample text.
In an alternative design, the generating module 902 is configured to input the audit tag of the sample text into the generating model during the regeneration of the third checking process of the sample text.
In an alternative design, the training module 904 is configured to train the content audit model according to the content features of the sample text, the audit tag, and the second checking process.
In an alternative design, the training module 904 is configured to input the content features of the sample text and the second checking process of the sample text into the content audit model to obtain a second prediction tag of the sample text; determine an error loss according to the error between the second prediction tag of the sample text and the audit tag of the sample text; and train the content audit model through the error loss.
It should be noted that: the training device for the content audit model provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the training device of the content auditing model provided in the above embodiment and the training method embodiment of the content auditing model belong to the same concept, and detailed implementation processes of the training device and the training method embodiment of the content auditing model are detailed in the method embodiment and are not described herein.
Embodiments of the present application also provide a computer device comprising: the system comprises a processor and a memory, wherein at least one instruction, at least one section of program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by the processor to realize the training method of the content audit model provided by each method embodiment.
Illustratively, fig. 11 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
The computer apparatus 1100 includes a central processing unit (Central Processing Unit, CPU) 1101, a system Memory 1104 including a random access Memory (Random Access Memory, RAM) 1102 and a Read-Only Memory (ROM) 1103, and a system bus 1105 connecting the system Memory 1104 and the central processing unit 1101. The computer device 1100 also includes a basic Input/Output system (I/O) 1106, which helps to transfer information between the various devices within the computer device, and a mass storage device 1107 for storing an operating system 1113, application programs 1114, and other program modules 1115.
The basic input/output system 1106 includes a display 1108 for displaying information and an input device 1109, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1108 and the input device 1109 are both coupled to the central processing unit 1101 through an input-output controller 1110 coupled to the system bus 1105. The basic input/output system 1106 may also include an input/output controller 1110 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1110 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1107 is connected to the central processing unit 1101 through a mass storage controller (not shown) connected to the system bus 1105. The mass storage device 1107 and its associated computer-readable storage medium provide non-volatile storage for the computer device 1100. That is, the mass storage device 1107 may include a computer-readable storage medium (not shown) such as a hard disk or a compact disk-Only (CD-ROM) drive.
The computer-readable storage medium may include computer storage media and communication media without loss of generality. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable storage instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory devices, CD-ROM, digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1104 and mass storage device 1107 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 1101, the one or more programs containing instructions for implementing the above-described method embodiments, the central processing unit 1101 executing the one or more programs to implement the methods provided by the various method embodiments described above.
According to various embodiments of the present application, the computer device 1100 may also run as a remote computer device connected through a network, such as the Internet. That is, the computer device 1100 may be connected to the network 1112 through a network interface unit 1111 coupled to the system bus 1105, or the network interface unit 1111 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also stores one or more programs, which include instructions for the steps performed by the computer device in the methods provided by the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, wherein at least one instruction, at least one section of program, code set or instruction set is stored in the computer readable storage medium, and when the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by a processor of computer equipment, the training method of the content audit model provided by the embodiment of the method is realized.
The present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the training method of the content audit model provided by the above method embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above mentioned computer readable storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments is merely illustrative of the present application and is not intended to limit the invention to the particular embodiments shown, but on the contrary, the intention is to cover all modifications, equivalents, alternatives, and alternatives falling within the spirit and principles of the invention.

Claims (15)

1. A method for training a content audit model, the method comprising:
acquiring content characteristics of a plurality of sample texts and an audit label of each sample text, wherein the audit label is used for reflecting whether the content of the sample text is compliant;
generating a first checking process of the sample text according to the content characteristics of the sample text through a generating model, wherein the first checking process is used for expressing a process of reasoning whether the content of the sample text is compliant or not;
sampling the content of the first checking process a plurality of times to obtain a plurality of sampling results of the first checking process; and performing a consistency check on the first checking process and the plurality of sampling results of the first checking process to screen out a second checking process of the sample text;
and training the content audit model through the second checking process of the sample text, wherein the content audit model is used for predicting labels of the text to be predicted according to content characteristics of the text to be predicted, and the labels of the text to be predicted are used for predicting whether the content of the text to be predicted is compliant.
2. The method of claim 1, wherein the sampling the content of the first checking process a plurality of times to obtain a plurality of sampling results of the first checking process, and performing a consistency check on the first checking process and the plurality of sampling results of the first checking process to screen out the second checking process of the sample text, comprises:
generating a question corresponding to the first checking process according to the first checking process through the generating model, wherein the question is related to the content of the first checking process; generating a first answer according to the first checking process and the question through the generating model;
performing multiple content sampling on the first checking process through the generation model to obtain multiple sampling results of the first checking process;
generating a second answer for each sampling result according to each sampling result of the first checking process and the question through the generating model;
And carrying out consistency check on the first answer of the first checking process and the second answer of each sampling result so as to screen out a second checking process of the sample text.
3. The method of claim 2, wherein the performing a consistency check on the first answer of the first checking process and the second answer of each sampling result to screen out the second checking process of the sample text comprises:
determining a consistency rate of the first answer of the first checking process and the second answer of each sampling result;
and removing the first checking processes with the consistency rate lower than a consistency rate threshold value from all the first checking processes, and determining the rest first checking processes as the second checking processes.
4. The method according to any one of claims 1 to 3, wherein the generating the first checking process of the sample text according to the content characteristics of the sample text through the generating model comprises:
inputting the content characteristics of the sample text and first guide information into the generation model to obtain a third checking process of the sample text, wherein the first guide information is used for guiding the generation model to generate the third checking process of the sample text according to the content characteristics of the sample text;
And determining a first checking process of the sample text according to the third checking process of the sample text.
5. The method of claim 4, wherein the determining the first checking process of the sample text according to the third checking process of the sample text comprises:
inputting the content characteristics of the sample text, the third checking process of the sample text, and second guide information into the generation model to obtain a first prediction tag of the sample text, wherein the first prediction tag is used for predicting whether the content of the sample text is compliant, and the second guide information is used for guiding the generation model to generate the first prediction tag of the sample text according to the content characteristics of the sample text and the third checking process of the sample text;
and determining a third checking process of the sample text as a first checking process of the sample text under the condition that the first prediction tag is consistent with the checking tag.
6. The method of claim 5, wherein the method further comprises:
acquiring third guide information obtained by modifying the first guide information under the condition that the first prediction tag is inconsistent with the audit tag;
inputting the content features of the sample text and the third guide information into the generation model to regenerate the third checking process of the sample text;
and determining a first checking process of the sample text according to the regenerated third checking process of the sample text.
7. The method of claim 6, wherein the method further comprises:
and in the process of regenerating the third checking process of the sample text, inputting the audit tag of the sample text into the generation model.
8. The method according to any one of claims 1 to 3, wherein the training the content audit model through the second review process of the sample text comprises:
training the content audit model according to the content features of the sample text, the audit tag, and the second review process.
9. The method of claim 8, wherein the training the content audit model according to the content features of the sample text, the audit tag, and the second review process comprises:
inputting the content features of the sample text and the second review process of the sample text into the content audit model to obtain a second prediction tag of the sample text;
determining an error loss according to an error between the second prediction tag of the sample text and the audit tag of the sample text; and
training the content audit model with the error loss.
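The error loss of claim 9 is not pinned down; a cross-entropy loss over the binary compliance tag is one natural reading. A PyTorch-flavored sketch under that assumption, with the model class and feature encoding left as placeholders:

    import torch.nn as nn

    def train_step(model: nn.Module, optimizer, features, review_process, audit_tag):
        # features / review_process: tensors encoding the sample text's content
        # features and its screened (second) review process.
        # audit_tag: tensor of 0/1 class indices (non-compliant / compliant).
        optimizer.zero_grad()
        logits = model(features, review_process)               # second prediction tag
        loss = nn.functional.cross_entropy(logits, audit_tag)  # error loss
        loss.backward()
        optimizer.step()
        return loss.item()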
10. A training apparatus for a content audit model, the apparatus comprising:
an acquisition module, configured to acquire content features of a plurality of sample texts and an audit tag of each sample text, wherein the audit tag reflects whether the content of the sample text is compliant;
a generation module, configured to generate, through a generation model, a first review process of the sample text according to the content features of the sample text, wherein the first review process expresses a process of reasoning about whether the content of the sample text is compliant;
a verification module, configured to perform content sampling on the first review process a plurality of times to obtain a plurality of sampling results of the first review process, and to perform a consistency check on the first review process and the sampling results of the first review process so as to screen out a second review process of the sample text; and
a training module, configured to train the content audit model through the second review process of the sample text, wherein the content audit model is used to predict a tag of a text to be predicted according to content features of the text to be predicted, and the tag of the text to be predicted is used to predict whether the content of the text to be predicted is compliant.
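Read as software rather than hardware, the apparatus of claim 10 is a four-module decomposition of the method. A skeletal sketch of that structure (class and method names are illustrative only):

    class ContentAuditTrainer:
        """Illustrative skeleton of the four modules in claim 10."""

        def acquire(self):
            # Acquisition module: content features and an audit tag per sample text.
            ...

        def generate_reviews(self, samples):
            # Generation module: a first review process per sample text,
            # produced by the generation model.
            ...

        def verify(self, reviews):
            # Verification module: sample each first review process several
            # times and keep those that pass the consistency check.
            ...

        def train(self, screened):
            # Training module: fit the content audit model on the screened
            # second review processes.
            ...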
11. The apparatus of claim 10, wherein:
the generation module is configured to generate, through the generation model, a question corresponding to the first review process according to the first review process, wherein the question is related to the content of the first review process; generate, through the generation model, a first answer according to the first review process and the question; perform content sampling on the first review process a plurality of times through the generation model to obtain a plurality of sampling results of the first review process; and generate, through the generation model, a second answer for each sampling result according to that sampling result and the question; and
the verification module is configured to perform a consistency check on the first answer of the first review process and the second answer of each sampling result, so as to screen out the second review process of the sample text.
12. The apparatus of claim 11, wherein:
the verification module is configured to determine a consistency rate between the first answer of the first review process and the second answer of each sampling result; remove, from all of the first review processes, those first review processes whose consistency rate is below a consistency rate threshold; and determine the remaining first review processes as the second review processes.
13. A computer device, comprising a processor and a memory, wherein the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the training method of a content audit model according to any one of claims 1 to 9.
14. A computer-readable storage medium, wherein the storage medium stores at least one program, and the at least one program is loaded and executed by a processor to implement the training method of a content audit model according to any one of claims 1 to 9.
15. A computer program product, comprising computer instructions stored in a computer-readable storage medium, wherein a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, causing the computer device to perform the training method of a content audit model according to any one of claims 1 to 9.
CN202311286453.0A 2023-09-28 2023-09-28 Training method, device, equipment and storage medium of content auditing model Pending CN117312562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311286453.0A CN117312562A (en) 2023-09-28 2023-09-28 Training method, device, equipment and storage medium of content auditing model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311286453.0A CN117312562A (en) 2023-09-28 2023-09-28 Training method, device, equipment and storage medium of content auditing model

Publications (1)

Publication Number Publication Date
CN117312562A true CN117312562A (en) 2023-12-29

Family

ID=89242195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311286453.0A Pending CN117312562A (en) 2023-09-28 2023-09-28 Training method, device, equipment and storage medium of content auditing model

Country Status (1)

Country Link
CN (1) CN117312562A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117688164A (en) * 2024-02-03 2024-03-12 Beijing Langboat Technology Co., Ltd. Hallucination detection method, system and storage medium based on large language model
CN117688164B (en) * 2024-02-03 2024-05-17 Beijing Langboat Technology Co., Ltd. Hallucination detection method, system and storage medium based on large language model

Similar Documents

Publication Publication Date Title
CN109344404B (en) Context-aware dual-attention natural language reasoning method
US7685082B1 (en) System and method for identifying, prioritizing and encapsulating errors in accounting data
CN112507628B (en) Risk prediction method and device based on deep bidirectional language model and electronic equipment
CN116992005B (en) Intelligent dialogue method, system and equipment based on large model and local knowledge base
CN115357719B (en) Power audit text classification method and device based on improved BERT model
CN114443899A (en) Video classification method, device, equipment and medium
CN111145914B (en) Method and device for determining text entity of lung cancer clinical disease seed bank
CN114372532B (en) Method, device, equipment, medium and product for determining label labeling quality
Zhang Voice keyword retrieval method using attention mechanism and multimodal information fusion
CN115310551A (en) Text analysis model training method and device, electronic equipment and storage medium
CN114386436B (en) Text data analysis method, model training method, device and computer equipment
Yadav et al. A novel automated depression detection technique using text transcript
Moon et al. Natural language processing based advanced method of unnecessary video detection
CN117312562A (en) Training method, device, equipment and storage medium of content auditing model
CN117312514A (en) Consultation reply method, consultation reply device and computer readable storage medium
CN115273856A (en) Voice recognition method and device, electronic equipment and storage medium
CN114330483A (en) Data processing method, model training method, device, equipment and storage medium
CN117634431A (en) Method and system for evaluating text style conversion quality
CN116450848B (en) Method, device and medium for evaluating computing thinking level based on event map
CN113918710A (en) Text data processing method and device, electronic equipment and readable storage medium
Habeeb Hate Speech Detection using Deep Learning Master thesis
CN116757493A (en) Virtual object service evaluation method and related device
CN116861913A (en) Position detection method based on GPT large model and related equipment
Akula et al. Credibility of social-media content using bidirectional long short-term memory-recurrent neural networks
CN112528015B (en) Method and device for judging rumor in message interactive transmission

Legal Events

Date Code Title Description
PB01 Publication