CN113010721B - Picture auditing method and device, electronic equipment and storage medium - Google Patents
Picture auditing method and device, electronic equipment and storage medium
- Publication number
- CN113010721B CN113010721B CN202110287468.3A CN202110287468A CN113010721B CN 113010721 B CN113010721 B CN 113010721B CN 202110287468 A CN202110287468 A CN 202110287468A CN 113010721 B CN113010721 B CN 113010721B
- Authority
- CN
- China
- Prior art keywords
- model
- risk
- picture
- sub
- auditing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5846—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Library & Information Science (AREA)
- Human Resources & Organizations (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Economics (AREA)
- General Engineering & Computer Science (AREA)
- Game Theory and Decision Science (AREA)
- Educational Administration (AREA)
- Development Economics (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The disclosure provides a picture auditing method and device, an electronic device, and a storage medium, relates to the technical field of artificial intelligence, and in particular to computer vision and deep learning. The specific implementation scheme is as follows: inputting the picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain an audit result output by each risk sub-model; and determining the final audit result of the picture to be audited based on the audit results output by each risk sub-model. The embodiments of the application can effectively improve the efficiency of picture auditing while also ensuring its accuracy.
Description
Technical Field
The disclosure relates to the technical field of artificial intelligence, and further relates to aspects of computer vision and deep learning, in particular to a picture auditing method, a picture auditing device, electronic equipment and a storage medium.
Background
With the development and progress of science and technology, in risk-control work such as picture content auditing, the mainstream auditing approach has become artificial intelligence (AI) auditing of pictures through computer vision models. However, the capability of a single computer vision model is often limited; in practice, the task is therefore split into a plurality of models according to factors such as risk type, and each model can only identify part of the risks.
With the picture auditing methods in the prior art, auditing efficiency and accuracy cannot both be achieved at the same time.
Disclosure of Invention
The application provides a picture auditing method, a picture auditing device, electronic equipment and a storage medium, which can effectively improve the picture auditing efficiency and ensure the picture auditing accuracy.
According to a first aspect of the present application, there is provided a picture auditing method, the method comprising:
inputting the picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; wherein the generic audit model comprises at least one generic sub-model;
determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk audit model includes at least one risk sub-model;
Inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain an audit result output by each risk sub-model;
and judging the final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
According to a second aspect of the present application, there is provided a picture auditing apparatus, the apparatus comprising: a general auditing module, a determining module, a risk auditing module and a judging module; wherein,
the general auditing module is used for inputting the picture to be audited into each general sub-model in the general auditing model to obtain the auditing result output by each general sub-model; wherein the generic audit model comprises at least one generic sub-model;
the determining module is used for determining at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk audit model includes at least one risk sub-model;
the risk auditing module is used for inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain auditing results output by each risk sub-model;
And the judging module is used for judging the final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
According to a third aspect of the present application, there is provided an electronic device comprising:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for auditing pictures according to any embodiment of the present application.
According to a fourth aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements a picture auditing method according to any embodiment of the present application.
According to a fifth aspect of the present application, there is provided a computer program product which when executed by a computer device implements the picture auditing method according to any embodiment of the present application.
According to the technical scheme provided by the application, the efficiency of picture auditing can be effectively improved, and meanwhile, the accuracy of picture auditing can be ensured.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
fig. 1 is a schematic flow chart of a picture auditing method according to an embodiment of the present application;
fig. 2 is a second flow chart of a picture auditing method according to an embodiment of the present application;
FIG. 3 is a third flow chart of a picture auditing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a picture auditing framework according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a picture auditing device according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a picture review method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example 1
Fig. 1 is a schematic flow chart of a picture auditing method according to an embodiment of the present application, where the method may be performed by a picture auditing apparatus or an electronic device, and the apparatus or the electronic device may be implemented by software and/or hardware, and the apparatus or the electronic device may be integrated into any intelligent device having a network communication function. As shown in fig. 1, the picture auditing method may include the following steps:
s101, inputting a picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; wherein the generic audit model includes at least one generic sub-model.
In this step, the electronic equipment can input the picture to be audited into each general sub-model in the general audit model to obtain the audit result output by each general sub-model; wherein the generic audit model includes at least one generic sub-model. Specifically, it is assumed that the general audit model includes N general sub-models, namely general sub-model 1, general sub-model 2, …, general sub-model N, where N is a natural number greater than or equal to 1. The electronic device may input the picture to be audited to general sub-model 1, general sub-model 2, …, and general sub-model N respectively, to obtain the audit result output by general sub-model 1, the audit result output by general sub-model 2, …, and the audit result output by general sub-model N.
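For illustration only, the fan-out over the general sub-models in this step can be sketched in Python as follows; the sub-model interface (a callable returning a label and a confidence) and all names used are assumptions made for the example, not part of the claimed method.

```python
from dataclasses import dataclass
from typing import Callable, List

# Assumed interface: each general sub-model is a callable that takes a picture
# (e.g. a decoded image array or a file path) and returns an audit result.
@dataclass
class AuditResult:
    label: str         # e.g. "has_person", "has_text", "cartoon", "real"
    confidence: float  # model confidence in [0, 1]

GeneralSubModel = Callable[[object], AuditResult]

def run_general_audit(picture, general_sub_models: List[GeneralSubModel]) -> List[AuditResult]:
    """Run the picture through every general sub-model (S101) and collect the results."""
    return [sub_model(picture) for sub_model in general_sub_models]
```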
S102, determining at least one risk sub-model corresponding to a picture to be audited in a risk audit model according to audit results output by all the general sub-models; wherein the risk audit model includes at least one risk sub-model.
In this step, the electronic equipment can determine at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk audit model includes at least one risk sub-model. Specifically, the general audit model in the embodiment of the present application includes, but is not limited to, at least one of the following general sub-models: cartoon/real submodel, character detection submodel, human attribute submodel, and character recognition submodel.
S103, inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain an audit result output by each risk sub-model.
In this step, the electronic device may input the picture to be audited to each risk sub-model in the at least one risk sub-model, so as to obtain the audit result output by each risk sub-model. Specifically, assume that the number of risk sub-models corresponding to the picture to be audited in the risk audit model is M, namely risk sub-model 1, risk sub-model 2, …, risk sub-model M, where M is a natural number greater than or equal to 1. The electronic equipment can input the picture to be audited into risk sub-model 1, risk sub-model 2, …, and risk sub-model M respectively, to obtain the audit result output by risk sub-model 1, the audit result output by risk sub-model 2, …, and the audit result output by risk sub-model M.
And S104, judging the final auditing result of the picture to be audited based on the auditing result output by each risk sub-model.
In this step, the electronic device may determine a final audit result of the picture to be audited based on the audit results output by each risk sub-model. Specifically, if at least two auditing results in the auditing results output by each risk sub-model meet a preset merging condition, the electronic equipment can merge the at least two auditing results in the auditing results output by each risk sub-model to obtain merged auditing results; and then, judging the final auditing result of the picture to be audited based on the combined auditing result.
According to the picture auditing method provided by the embodiment of the application, a picture to be audited is first input into each general sub-model in the general audit model to obtain the audit result output by each general sub-model; then, at least one risk sub-model corresponding to the picture to be audited in the risk audit model is determined according to the audit results output by the general sub-models; the picture to be audited is input into each risk sub-model in the at least one risk sub-model to obtain the audit result output by each risk sub-model; and finally, the final audit result of the picture to be audited is determined based on the audit results output by the risk sub-models. That is, the application can audit pictures based on a model with a hierarchical structure, taking the general audit model as the bottom-layer model and the risk audit model as the high-layer model. Through this hierarchical design, the risk audit models that actually need to be used can be screened out, so that the number of risk audit models invoked can be reduced and the audit time further shortened. In the existing picture auditing methods, one picture needs to pass through a plurality of risk models in sequence, and all calculation results of the risk models are then summarized and reviewed to obtain the final risk identification result. Because the application adopts the technical means of auditing pictures based on a hierarchically structured model, it overcomes the technical problems in the prior art that audit efficiency and accuracy are often incompatible, so that audit efficiency is reduced, and that the confidence of an output problem label being too low or too high causes a risky picture to be missed or a risk-free picture to be wrongly blocked, posing risks to online business. In addition, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize, and applicable to a wider range of scenarios.
Example two
Fig. 2 is a second flow chart of a picture auditing method according to an embodiment of the present application. Further optimization and expansion based on the above technical solution can be combined with the above various alternative embodiments. As shown in fig. 2, the picture auditing method may include the steps of:
s201, inputting a picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; wherein the generic audit model includes at least one generic sub-model.
S202, determining at least one risk sub-model corresponding to a picture to be audited in a risk audit model according to audit results output by all the general sub-models; wherein the risk audit model includes at least one risk sub-model.
In this step, the electronic equipment can determine at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model; wherein the risk audit model includes at least one risk sub-model. Specifically, if at least one general sub-model in the general audit sub-models detects that person information exists in the picture to be audited, the electronic device can determine that the risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the person information; or, if at least one general sub-model in the general audit sub-models detects that text information exists in the picture to be audited, the electronic device can determine that the risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the text information; or, if at least one general sub-model in the general audit sub-models detects that the picture to be audited is a cartoon picture or a real picture, the electronic device can determine that the risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the cartoon picture or a model associated with the real picture.
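A minimal sketch of this selection logic is given below for illustration; the bottom-layer label names, the 0.5 threshold, and the mapping from labels to risk sub-models are assumptions chosen for the example, not a fixed part of the method.

```python
def select_risk_sub_models(general_results, risk_model_requirements):
    """Select the risk sub-models the picture still needs to pass.

    general_results:         list of AuditResult from the general sub-models.
    risk_model_requirements: dict mapping a risk sub-model name to the set of
                             bottom-layer labels it requires (assumed convention).
    """
    detected = {r.label for r in general_results if r.confidence >= 0.5}  # assumed threshold
    selected = []
    for name, required_labels in risk_model_requirements.items():
        # A risk sub-model is only worth running if every bottom-layer label
        # it requires was actually detected in the picture.
        if required_labels <= detected:
            selected.append(name)
    return selected

# Example mapping (hypothetical label and model names):
risk_model_requirements = {
    "real_person_low_custom": {"has_person", "real"},
    "illegal_game_name":      {"has_text"},
    "cartoon_low_custom":     {"cartoon"},
}
```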
When auditing risk pictures, most risks require certain preconditions in terms of content composition. For example, low-custom risks and sensitive-person risks are related to persons and require a person to exist in the picture; illegal game name recognition, infringing brand name recognition and the like are related to text and require text to exist in the picture. These preconditions can be summarized from routine experience, legal analysis and other methods, used as bottom-layer labels, and abstracted into technical-level model solutions serving as the bottom-layer models in the overall auditing process. Specifically, the electronic device may perform the following aspects of auditing on the picture to be audited through the general audit model: 1) Whether a person exists in the picture, used as a bottom-layer label: for risks such as low-custom content and infringement, if no person exists in the picture, these categories of risk do not arise, so a human body detection model can be used as a bottom-layer model. Some classical human body detection algorithms, such as HOG+AdaBoost, RCNN and YOLO models, can detect human bodies; meanwhile, some face detection algorithms, such as MTCNN and S3FD models, can also be used. 2) Whether text exists in the picture, used as a bottom-layer label: for risks such as false or exaggerated claims in pictures and illegal games, if no text exists in the picture, these categories of risk do not arise, so a text detection model can be used as a bottom-layer model. Some classical OCR algorithms, such as EAST and TextBoxes++, can recognize whether text exists in an image as well as the text content. 3) Whether the audited picture is a cartoon picture or a real picture, used as a bottom-layer label: in actual risk model construction, cartoon-type risks are often built into independent models, such as cartoon low-custom risks and sensitive-person cartoon risks; if the picture is a real picture, those risks do not exist, so a classification model that distinguishes cartoon pictures from real pictures can be used as a bottom-layer model. Some classical classification algorithms, such as ResNet and VGG, can recognize whether an image is a cartoon picture or a real picture.
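As one illustrative possibility, the cartoon/real bottom-layer label could be produced by a standard classification network such as ResNet. The sketch below uses a two-class ResNet-18 from torchvision; the checkpoint file cartoon_real.pth and the class order are hypothetical and stand in for whatever classifier an implementation actually trains.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumed: a ResNet-18 fine-tuned elsewhere for the two classes ["cartoon", "real"].
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("cartoon_real.pth", map_location="cpu"))  # hypothetical checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_cartoon_real(path: str) -> str:
    """Return the assumed bottom-layer label 'cartoon' or 'real' for one picture."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return ["cartoon", "real"][int(logits.argmax(dim=1))]
```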
The bottom-layer model is built by extracting the bottom-layer labels and abstracting them into a technical solution, and this bottom-layer model is the general audit model. For a particular risk type, the features required by that type may not be present in the general audit labels at all; in that case—that is, if the result of the general audit model contains a general audit label and the risk type corresponding to a given risk audit model has none of the features indicated by that label—the picture to be audited can be directly judged to have no corresponding risk without being identified by that risk audit model, so that the time for auditing the picture is reduced as a whole.
S203, fine-tuning at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit result output by each general sub-model, and obtaining at least one fine-tuned risk sub-model corresponding to the picture to be audited in the risk audit model.
In this step, the electronic device may fine-tune at least one risk sub-model corresponding to the picture to be audited in the risk audit model according to the audit results output by the general sub-models, so as to obtain at least one fine-tuned risk sub-model corresponding to the picture to be audited in the risk audit model. Specifically, specific labels output by the general audit model may be used to fine-tune the output results of the risk audit model. When the final risk identification result is obtained, the main method at the present stage is to compare the confidence of a specific label in the result of the audit model with a preset threshold value so as to judge whether the risk exists. However, due to problems such as limited model accuracy, the judgment result is sometimes wrong because the confidence is too low or exceeds the threshold, and the picture is finally missed or wrongly blocked on the service side. The labels of the general audit model and the labels of some risk audit models often have a strong correlation; based on this strong correlation, the embodiment of the application can fine-tune the results of the risk audit models according to the results output by the general audit model, and finally obtain the risk identification result jointly, thereby improving the accuracy of the audit system. In addition to strong correlation between the labels of the general audit model and the labels of some risk audit models, some label combinations may be mutually exclusive, or may directly constitute a risk when they occur together; when such a label combination occurs, post-processing of the results such as confidence comparison can often be omitted, and the final risk identification result can be given directly.
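For illustration, the fine-tuning described here can be sketched as adjusting risk-model confidences with a small table of correlated and mutually exclusive labels; the label names, the correlation tables, and the boost factor below are assumptions for the example only.

```python
def fine_tune_risk_results(risk_confidences, general_labels, boost=1.2):
    """Adjust risk sub-model confidences using general audit labels (assumed rules).

    risk_confidences: dict {risk_label: confidence} output by the risk sub-models.
    general_labels:   set of labels output by the general sub-models.
    """
    # Hypothetical correlation tables: which general labels support or
    # contradict which risk labels.
    supports    = {"sensitive_person": {"has_person"}, "illegal_game_name": {"has_text"}}
    contradicts = {"sensitive_person": {"no_person"},  "illegal_game_name": {"no_text"}}

    adjusted = {}
    for risk_label, conf in risk_confidences.items():
        if contradicts.get(risk_label, set()) & general_labels:
            conf = 0.0                     # mutually exclusive labels: no such risk
        elif supports.get(risk_label, set()) & general_labels:
            conf = min(1.0, conf * boost)  # strongly correlated labels: raise confidence
        adjusted[risk_label] = conf
    return adjusted
```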
S204, inputting the picture to be audited into each risk sub-model in the corresponding at least one risk sub-model, and obtaining an audit result output by each risk sub-model.
S205, judging the final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
According to the picture auditing method provided by the embodiment of the application, a picture to be audited is first input into each general sub-model in the general audit model to obtain the audit result output by each general sub-model; then, at least one risk sub-model corresponding to the picture to be audited in the risk audit model is determined according to the audit results output by the general sub-models; the picture to be audited is input into each risk sub-model in the at least one risk sub-model to obtain the audit result output by each risk sub-model; and finally, the final audit result of the picture to be audited is determined based on the audit results output by the risk sub-models. That is, the application can audit pictures based on a model with a hierarchical structure, taking the general audit model as the bottom-layer model and the risk audit model as the high-layer model. Through this hierarchical design, the risk audit models that actually need to be used can be screened out, so that the number of risk audit models invoked can be reduced and the audit time further shortened. In the existing picture auditing methods, one picture needs to pass through a plurality of risk models in sequence, and all calculation results of the risk models are then summarized and reviewed to obtain the final risk identification result. Because the application adopts the technical means of auditing pictures based on a hierarchically structured model, it overcomes the technical problems in the prior art that audit efficiency and accuracy are often incompatible, so that audit efficiency is reduced, and that the confidence of an output problem label being too low or too high causes a risky picture to be missed or a risk-free picture to be wrongly blocked, posing risks to online business. In addition, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize, and applicable to a wider range of scenarios.
Example III
Fig. 3 is a third flow chart of a picture auditing method according to an embodiment of the present application. Further optimization and expansion based on the above technical solution can be combined with the above various alternative embodiments. As shown in fig. 3, the picture auditing method may include the steps of:
s301, inputting a picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; wherein the generic audit model includes at least one generic sub-model.
S302, determining at least one risk sub-model corresponding to a picture to be audited in a risk audit model according to audit results output by all the general sub-models; wherein the risk audit model includes at least one risk sub-model.
S303, inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain an audit result output by each risk sub-model.
S304, if at least two auditing results in the auditing results output by each risk sub-model meet preset merging conditions, merging the at least two auditing results in the auditing results output by each risk sub-model to obtain merged auditing results.
In this step, if at least two audit results in the audit results output by the risk sub-models meet a preset merging condition, the electronic device may merge those audit results to obtain a merged audit result. For example, the combination of the results "there is a low-custom risk", "there is no sensitive-content risk" and "there is no text risk" is equivalent to the single result "there is a low-custom risk"; therefore, the present application can merge "there is a low-custom risk", "there is no sensitive-content risk" and "there is no text risk" into "there is a low-custom risk" and take it as the final audit result of the picture to be audited.
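The merging rule in the example above can be sketched as follows; folding all "no risk" results into a coexisting "risk present" result is an assumed reading of the preset merging condition, not the only possible one.

```python
def merge_audit_results(risk_flags):
    """Merge individual risk sub-model results into one result (S304).

    risk_flags: dict {risk_label: has_risk}, e.g.
        {"low_custom": True, "sensitive_content": False, "text_risk": False}
    Assumed merging condition: results stating "no risk" can be folded into
    any coexisting "risk present" result.
    """
    positives = [label for label, has_risk in risk_flags.items() if has_risk]
    if positives:
        return {"risk": True, "labels": positives}  # e.g. {"risk": True, "labels": ["low_custom"]}
    return {"risk": False, "labels": []}
```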
S305, judging the final auditing result of the picture to be audited based on the combined auditing result.
Fig. 4 is a schematic structural diagram of a picture auditing framework according to an embodiment of the present application. As shown in fig. 4, the audit framework may include: a picture input unit, a general model unit, a risk auditing unit and a risk judging unit; wherein, 1) the picture input unit acquires pictures from a local picture library or the network environment according to the requirements of the auditing system; 2) the general model unit summarizes the bottom-layer labels that are helpful for auditing according to the characteristics of all risk types, and performs identification in this unit; 3) the risk auditing unit filters out the risk audit models that do not need to be used according to the result of the general model unit, so that subsequent pictures do not need to pass through those models, and the remaining risk audit models audit the picture in this unit; 4) the risk judging unit performs risk identification according to the result of each risk audit model in the risk auditing unit.
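To make the cooperation of the four units concrete, a minimal end-to-end sketch is given below. It reuses the illustrative helpers from the earlier sketches (run_general_audit, select_risk_sub_models, fine_tune_risk_results, merge_audit_results); the unit boundaries follow FIG. 4, and the 0.5 thresholds are assumptions.

```python
def audit_picture(picture, general_sub_models, risk_sub_models, risk_model_requirements):
    """Illustrative end-to-end flow of the auditing framework in FIG. 4."""
    # 1) Picture input unit: `picture` is assumed to be already loaded.

    # 2) General model unit: produce the bottom-layer labels.
    general_results = run_general_audit(picture, general_sub_models)
    general_labels = {r.label for r in general_results if r.confidence >= 0.5}

    # 3) Risk auditing unit: filter out risk models whose preconditions fail,
    #    then run only the remaining ones.
    selected = select_risk_sub_models(general_results, risk_model_requirements)
    risk_results = {name: risk_sub_models[name](picture) for name in selected}

    # 4) Risk determination unit: fine-tune with the general labels and merge.
    confidences = {name: res.confidence for name, res in risk_results.items()}
    adjusted = fine_tune_risk_results(confidences, general_labels)
    flags = {name: conf >= 0.5 for name, conf in adjusted.items()}  # assumed threshold
    return merge_audit_results(flags)
```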
The auditing flow provided by the application uses a hierarchical structure in which the bottom layer is the general audit model and the high layer is the risk audit model; the pre-positioned judgment of the general audit model reduces the number of risk audit models used subsequently, so that audit time is reduced while accuracy is ensured. In addition, the result of the general audit model in the present application is also fed into the risk determination unit. Because the labels and confidences output by the risk audit model can cause certain problems, the result of the general audit model can be used jointly in the risk judging flow to assist in judging the audit result, thereby improving the audit accuracy.
According to the technical scheme provided by the application, the risk auditing flow is organized according to a hierarchical structure, and the number of risk audit models used in auditing can be reduced based on this hierarchical structure, thereby improving auditing efficiency. In the auditing flow based on the hierarchical structure, the output results of the models at each level are used jointly in the risk judging stage, so that auditing accuracy can also be indirectly improved.
According to the picture auditing method provided by the embodiment of the application, a picture to be audited is first input into each general sub-model in the general audit model to obtain the audit result output by each general sub-model; then, at least one risk sub-model corresponding to the picture to be audited in the risk audit model is determined according to the audit results output by the general sub-models; the picture to be audited is input into each risk sub-model in the at least one risk sub-model to obtain the audit result output by each risk sub-model; and finally, the final audit result of the picture to be audited is determined based on the audit results output by the risk sub-models. That is, the application can audit pictures based on a model with a hierarchical structure, taking the general audit model as the bottom-layer model and the risk audit model as the high-layer model. Through this hierarchical design, the risk audit models that actually need to be used can be screened out, so that the number of risk audit models invoked can be reduced and the audit time further shortened. In the existing picture auditing methods, one picture needs to pass through a plurality of risk models in sequence, and all calculation results of the risk models are then summarized and reviewed to obtain the final risk identification result. Because the application adopts the technical means of auditing pictures based on a hierarchically structured model, it overcomes the technical problems in the prior art that audit efficiency and accuracy are often incompatible, so that audit efficiency is reduced, and that the confidence of an output problem label being too low or too high causes a risky picture to be missed or a risk-free picture to be wrongly blocked, posing risks to online business. In addition, the technical solution of the embodiment of the application is simple and convenient to implement, easy to popularize, and applicable to a wider range of scenarios.
Example IV
Fig. 5 is a schematic structural diagram of a picture auditing device according to an embodiment of the present application. As shown in fig. 5, the apparatus 500 includes: a general audit module 501, a determination module 502, a risk audit module 503, and a decision module 504; wherein,
the general auditing module 501 is configured to input a picture to be audited into each general sub-model in the general auditing model, so as to obtain an auditing result output by each general sub-model; wherein the generic audit model comprises at least one generic sub-model;
the determining module 502 is configured to determine, according to an audit result output by each general sub-model, at least one risk sub-model corresponding to the picture to be audited in the risk audit model; wherein the risk audit model includes at least one risk sub-model;
the risk auditing module 503 is configured to input the picture to be audited into each risk sub-model in the at least one risk sub-model, so as to obtain an auditing result output by each risk sub-model;
the determining module 504 is configured to determine a final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
Further, the generic audit model includes, but is not limited to, at least one of the following generic sub-models: cartoon/real submodel, character detection submodel, human attribute submodel, and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: cartoon low-custom recognition submodel, real person low-custom recognition submodel, picture sensitive marker submodel, picture plain text submodel, false exaggeration submodel, sensitive character recognition submodel, character low-custom recognition submodel.
Further, the determining module 502 is specifically configured to determine that, if at least one of the general audit sub-models detects that the person information exists in the picture to be audited, a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the person information.
Further, the determining module 502 is specifically configured to determine that, if at least one of the general sub-models detects that text information exists in the picture to be audited, a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the text information.
Further, the determining module 502 is specifically configured to determine that, if at least one of the general audit sub-models detects that the picture to be audited is a cartoon picture or a real picture, a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the cartoon picture or a model associated with the real picture.
Further, the risk auditing module 503 is further configured to fine tune at least one risk sub-model corresponding to the picture to be audited in the risk auditing model according to the auditing result output by each general sub-model, so as to obtain at least one fine-tuned risk sub-model corresponding to the picture to be audited in the risk auditing model; and executing the operation of inputting the picture to be audited into each risk sub-model in the at least one risk sub-model.
Further, the determining module 504 is specifically configured to combine at least two audit results in the audit results output by each risk sub-model to obtain a combined audit result if at least two audit results in the audit results output by each risk sub-model meet a preset combination condition; and judging the final auditing result of the picture to be audited based on the combined auditing result.
The picture auditing device can execute the method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. Technical details which are not described in detail in the present embodiment can be referred to the picture auditing method provided in any embodiment of the present application.
Example five
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, such as a picture review method. For example, in some embodiments, the picture auditing method may be implemented as a computer software program that is tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by computing unit 601, one or more steps of the picture review method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the picture auditing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service are overcome.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (10)
1. A picture auditing method, the method comprising:
inputting the picture to be audited into each general sub-model in the general audit model to obtain an audit result output by each general sub-model; wherein the generic audit model comprises at least one generic sub-model;
if at least one of the general sub-models detects that person information exists in the picture to be audited, determining that a risk sub-model corresponding to the picture to be audited in a risk audit model is a model associated with the person information; if at least one of the general sub-models detects that text information exists in the picture to be audited, determining that a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the text information; if at least one general sub-model in the general sub-models detects that the picture to be audited is a cartoon picture or a real picture, determining that a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the cartoon picture or a model associated with the real picture; wherein the risk audit model includes at least one risk sub-model;
Inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain an audit result output by each risk sub-model;
and judging the final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
2. A method according to claim 1, the generic audit model including, but not limited to, at least one of the following generic sub-models: cartoon/real submodel, character detection submodel, human attribute submodel, and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: cartoon low-custom recognition submodel, real person low-custom recognition submodel, picture sensitive marker submodel, picture plain text submodel, false exaggeration submodel, sensitive character recognition submodel, character low-custom recognition submodel.
3. The method of claim 1, prior to said inputting the picture to be audited into each of the at least one risk sub-models, the method further comprising:
according to the auditing results output by the general sub-models, fine tuning is carried out on at least one risk sub-model corresponding to the picture to be audited in the risk auditing model, and at least one fine-tuned risk sub-model corresponding to the picture to be audited in the risk auditing model is obtained; and executing the operation of inputting the picture to be audited into each risk sub-model in the at least one risk sub-model.
4. The method according to claim 1, wherein the determining the final auditing result of the picture to be audited based on the auditing result output by each risk sub-model includes:
if at least two auditing results in the auditing results output by each risk sub-model meet preset merging conditions, merging the at least two auditing results in the auditing results output by each risk sub-model to obtain merged auditing results;
and judging the final auditing result of the picture to be audited based on the combined auditing result.
5. A picture auditing device, the device comprising: a general auditing module, a determining module, a risk auditing module and a judging module; wherein,
the general auditing module is used for inputting the picture to be audited into each general sub-model in the general auditing model to obtain the auditing result output by each general sub-model; wherein the generic audit model comprises at least one generic sub-model;
the determining module is configured to determine, if at least one of the general sub-models detects that person information exists in the picture to be audited, that a risk sub-model corresponding to the picture to be audited in a risk audit model is a model associated with the person information; if at least one of the general sub-models detects that text information exists in the picture to be audited, determine that a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the text information; if at least one general sub-model in the general sub-models detects that the picture to be audited is a cartoon picture or a real picture, determine that a risk sub-model corresponding to the picture to be audited in the risk audit model is a model associated with the cartoon picture or a model associated with the real picture; wherein the risk audit model includes at least one risk sub-model;
The risk auditing module is used for inputting the picture to be audited into each risk sub-model in the at least one risk sub-model to obtain auditing results output by each risk sub-model;
and the judging module is used for judging the final auditing result of the picture to be audited based on the auditing results output by each risk sub-model.
6. An apparatus according to claim 5, the generic audit model including, but not limited to, at least one of the following generic sub-models: cartoon/real submodel, character detection submodel, human attribute submodel, and character recognition submodel; the risk audit model includes, but is not limited to, at least one of the following risk sub-models: cartoon low-custom recognition submodel, real person low-custom recognition submodel, picture sensitive marker submodel, picture plain text submodel, false exaggeration submodel, sensitive character recognition submodel, character low-custom recognition submodel.
7. The apparatus of claim 5, the risk auditing module further configured to fine tune at least one risk sub-model corresponding to the picture to be audited in the risk auditing model according to an auditing result output by each general sub-model, so as to obtain at least one fine-tuned risk sub-model corresponding to the picture to be audited in the risk auditing model; and executing the operation of inputting the picture to be audited into each risk sub-model in the at least one risk sub-model.
8. The apparatus of claim 5, wherein the determining module is specifically configured to combine at least two audit results in audit results output by each risk sub-model to obtain a combined audit result if at least two audit results in audit results output by each risk sub-model meet a preset combination condition; and judging the final auditing result of the picture to be audited based on the combined auditing result.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110287468.3A CN113010721B (en) | 2021-03-17 | 2021-03-17 | Picture auditing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110287468.3A CN113010721B (en) | 2021-03-17 | 2021-03-17 | Picture auditing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113010721A CN113010721A (en) | 2021-06-22 |
CN113010721B true CN113010721B (en) | 2023-10-10 |
Family
ID=76409383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110287468.3A Active CN113010721B (en) | 2021-03-17 | 2021-03-17 | Picture auditing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113010721B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114881615A (en) * | 2022-06-06 | 2022-08-09 | 山东浪潮爱购云链信息科技有限公司 | Forum data interaction method and device in purchasing platform |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408669A (en) * | 2018-09-29 | 2019-03-01 | 北京数美时代科技有限公司 | A kind of content auditing method and device for different application scene |
CN110766033A (en) * | 2019-05-21 | 2020-02-07 | 北京嘀嘀无限科技发展有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111090776A (en) * | 2019-12-20 | 2020-05-01 | 广州市百果园信息技术有限公司 | Video auditing method, device, auditing server and storage medium |
CN111385602A (en) * | 2018-12-29 | 2020-07-07 | 广州市百果园信息技术有限公司 | Video auditing method, medium and computer equipment based on multi-level and multi-model |
CN111382291A (en) * | 2020-03-12 | 2020-07-07 | 北京金山云网络技术有限公司 | Machine auditing method and device and machine auditing server |
CN111967493A (en) * | 2020-06-29 | 2020-11-20 | 北京百度网讯科技有限公司 | Image auditing method and device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180365687A1 (en) * | 2013-06-30 | 2018-12-20 | EMC IP Holding Company LLC | Fraud detection |
US10613707B2 (en) * | 2015-12-10 | 2020-04-07 | International Business Machines Corporation | Auditing icons via image recognition to provide individualized assets to software project teams |
- 2021-03-17: CN application CN202110287468.3A granted as patent CN113010721B (active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408669A (en) * | 2018-09-29 | 2019-03-01 | 北京数美时代科技有限公司 | A kind of content auditing method and device for different application scene |
CN111385602A (en) * | 2018-12-29 | 2020-07-07 | 广州市百果园信息技术有限公司 | Video auditing method, medium and computer equipment based on multi-level and multi-model |
CN110766033A (en) * | 2019-05-21 | 2020-02-07 | 北京嘀嘀无限科技发展有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111090776A (en) * | 2019-12-20 | 2020-05-01 | 广州市百果园信息技术有限公司 | Video auditing method, device, auditing server and storage medium |
CN111382291A (en) * | 2020-03-12 | 2020-07-07 | 北京金山云网络技术有限公司 | Machine auditing method and device and machine auditing server |
CN111967493A (en) * | 2020-06-29 | 2020-11-20 | 北京百度网讯科技有限公司 | Image auditing method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Application of Artificial Intelligence in the Auditing of Traffic Violation Evidence; Zeng Qiang; Information Recording Materials (04); 128-129 *
Realizing Intelligent Content Auditing through Artificial Intelligence and Its Practical Application at the World Cup; Zhang Jianjun; Sun Tao; Meng Fang; Modern Television Technology (08); 45-47+138 *
Also Published As
Publication number | Publication date |
---|---|
CN113010721A (en) | 2021-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240304015A1 (en) | Method of training deep learning model for text detection and text detection method | |
US12118770B2 (en) | Image recognition method and apparatus, electronic device and readable storage medium | |
CN112989235B (en) | Knowledge base-based inner link construction method, device, equipment and storage medium | |
US11881050B2 (en) | Method for detecting face synthetic image, electronic device, and storage medium | |
CN113657483A (en) | Model training method, target detection method, device, equipment and storage medium | |
CN112560481B (en) | Statement processing method, device and storage medium | |
CN113378855A (en) | Method for processing multitask, related device and computer program product | |
CN113643260A (en) | Method, apparatus, device, medium and product for detecting image quality | |
CN118365990B (en) | Model training method and device applied to contraband detection and electronic equipment | |
CN113947700A (en) | Model determination method and device, electronic equipment and memory | |
CN113010721B (en) | Picture auditing method and device, electronic equipment and storage medium | |
CN114924959A (en) | Page testing method and device, electronic equipment and medium | |
CN117743577A (en) | Text classification method, device, electronic equipment and storage medium | |
CN113902899A (en) | Training method, target detection method, device, electronic device and storage medium | |
CN113379592A (en) | Method and device for processing sensitive area in picture and electronic equipment | |
CN116385789A (en) | Image processing method, training device, electronic equipment and storage medium | |
CN113033431B (en) | Optical character recognition model training and recognition method, device, equipment and medium | |
CN112818972B (en) | Method and device for detecting interest point image, electronic equipment and storage medium | |
CN114973333A (en) | Human interaction detection method, human interaction detection device, human interaction detection equipment and storage medium | |
CN114912541A (en) | Classification method, classification device, electronic equipment and storage medium | |
CN113901901A (en) | Training method and device of image processing model, electronic equipment and medium | |
CN114492364A (en) | Same vulnerability judgment method, device, equipment and storage medium | |
CN113947771A (en) | Image recognition method, apparatus, device, storage medium, and program product | |
CN113238765B (en) | Method, device, equipment and storage medium for distributing small program | |
CN114299522B (en) | Image recognition method device, apparatus and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |