CN113780723A - Method, system and equipment for evaluating quality inspection of multimedia content - Google Patents
- Publication number
- CN113780723A
- Authority
- CN
- China
- Prior art keywords
- evaluation
- audit
- task
- content
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06395—Quality analysis or management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Theoretical Computer Science (AREA)
- Strategic Management (AREA)
- General Physics & Mathematics (AREA)
- Economics (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Development Economics (AREA)
- Tourism & Hospitality (AREA)
- Marketing (AREA)
- Entrepreneurship & Innovation (AREA)
- Educational Administration (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Primary Health Care (AREA)
- General Health & Medical Sciences (AREA)
- Game Theory and Decision Science (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method, a system and a device for evaluating the quality inspection of multimedia content. First, an evaluation task is created according to a task configuration, the configuration comprising the business party to be evaluated and the time range, quantity and conditions of the spot check; the created evaluation task is then sent to a message queue. By monitoring the message queue, a data-production process is started when a task is found in the queue: content matching the task configuration is extracted from the business party's full data or from its audit records according to the configuration information, and the snapshot content of the business item, the audit state recorded at its most recent audit, and the item's number are written into an evaluation table. After the evaluators complete the evaluation task, their audit states are recorded and an evaluation report is generated. By creating evaluation tasks of different dimensions, the invention improves evaluation efficiency, truly and effectively reflects audit quality, and comprehensively reflects the overall safety of the platform content.
Description
Technical Field
The invention relates to a method, a system and a device for evaluating the quality inspection of multimedia content, and belongs to the field of internet applications.
Background
In internet content-community products, it is increasingly important to ensure the compliance of online multimedia content (including text, pictures, videos, etc.). Once non-compliant content spreads, the platform faces regulatory penalties, user complaints, public-opinion risks, and the like.
At present there are many solutions for content audit, but the schemes for evaluating the quality of the audit itself are limited. The traditional approach is to manually export part of the already-audited content, grouped by auditor, and re-audit it to estimate audit accuracy. This approach has the following problems: first, the dimension is single, so only an auditor's accidental-injury (false-rejection) rate can be checked, while missed violations are hard to detect; second, quality-inspection results are kept for a long time and are hard to review, so the work of the quality inspectors themselves cannot be verified; third, for content products that users can update in real time, the content extracted at inspection time may no longer match what was audited, so it cannot truly and effectively reflect audit quality; fourth, content audited by machines cannot be quality-inspected at the same time, so the overall safety of the platform content cannot be accurately reflected.
Disclosure of Invention
Purpose of the invention: in view of the problems in the prior art, the invention aims to provide a method, a system and a device for evaluating the quality inspection of multimedia content which, by creating evaluation tasks of different dimensions, improve evaluation efficiency and the accuracy of evaluating auditors, truly and effectively reflect audit quality, and comprehensively reflect the overall safety of the platform content.
Technical scheme: to achieve the above object, the invention provides a method for evaluating the quality inspection of multimedia content, comprising the following steps:
establishing an evaluation task according to a task configuration; the task configuration comprises the business party to be evaluated and the time range, quantity and conditions of the spot check; wherein the spot-check conditions at least comprise an evaluation type, the evaluation type being full evaluation or audited evaluation;
sending the created evaluation task to a message queue;
monitoring the message queue and, when a task is found in the queue, starting a process to produce evaluation data, specifically extracting content matching the task configuration information from the full data of the business party or from its audit records and recording it in an evaluation table for the evaluators to use; the evaluation table records the snapshot content of the evaluated business item, the audit state recorded when it was last audited, and the item's number;
and after the evaluators complete the evaluation task, recording their audit states and generating an evaluation report by comparing them with the audit states recorded when the content was last audited.
In particular embodiments, the business party to be evaluated is articles, users, videos or comments.
Preferably, the conditions of the spot check further include one or more of a designated service number, a designated keyword, a designated auditor and a designated audit state.
Preferably, when the evaluation type is full evaluation, either the latest business content is extracted at random or the business content hitting a designated keyword is extracted, depending on whether random extraction or keyword designation is configured; when the evaluation type is audited evaluation, already-audited business content is extracted by designating auditors and/or an audit state; the audit state is pass or fail.
Preferably, when the evaluation data is produced, a scanner extracts data from the corresponding extraction source at a fixed rate according to the task configuration and delivers it to an evaluation generator; the evaluation generator collects the data extracted by the scanner and writes the relevant content into the evaluation table.
Preferably, the accuracy rate, leakage (miss) rate and accidental-injury rate of the evaluated quality inspection are calculated by comparing the evaluator's audit state with the first audit state given by the original auditor or machine, where the accuracy rate is the number of items whose first audit state agrees with the evaluator's audit state divided by the total number of evaluated items; the leakage rate is the number of items whose first audit state is pass but whose evaluator audit state is fail divided by the total number of evaluated items; and the accidental-injury rate is the number of items whose first audit state is fail but whose evaluator audit state is pass divided by the total number of evaluated items.
Based on the same inventive concept, the invention provides a system for evaluating quality inspection of multimedia content, which comprises:
a task creation module for creating an evaluation task according to a task configuration and sending the created task to a message queue; the task configuration comprises the business party to be evaluated and the time range, quantity and conditions of the spot check, the spot-check conditions at least comprising an evaluation type, which is either full evaluation or audited evaluation;
an evaluation data production module for monitoring the message queue and, when a task is found in the queue, starting a process to produce evaluation data, specifically extracting content matching the task configuration information from the full data of the business party or from its audit records and recording it in the evaluation table for the evaluators to use; the evaluation table records the snapshot content of the evaluated business item, the audit state recorded when it was last audited, and the item's number;
and an evaluation module for recording the evaluators' audit states after they complete the evaluation task and generating an evaluation report by comparing them with the audit states recorded when the content was last audited.
Preferably, when the evaluation type is full evaluation, the evaluation data production module extracts the latest business content at random or extracts the business content hitting a designated keyword according to the configuration information; when the evaluation type is audited evaluation, already-audited business content is extracted by designating auditors and/or an audit state; the audit state is pass or fail.
Preferably, the evaluation module calculates the accuracy rate, leakage rate and accidental-injury rate of the evaluated quality inspection by comparing the evaluator's audit state with the first audit state given by the original auditor or machine, where the accuracy rate is the number of items whose first audit state agrees with the evaluator's audit state divided by the total number of evaluated items; the leakage rate is the number of items whose first audit state is pass but whose evaluator audit state is fail divided by the total number of evaluated items; and the accidental-injury rate is the number of items whose first audit state is fail but whose evaluator audit state is pass divided by the total number of evaluated items.
Based on the same inventive concept, the invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above method for evaluating the quality inspection of multimedia content.
Advantageous effects: compared with the prior art, the invention has the following advantages:
1) The invention solves the single-dimension problem of traditional quality-inspection schemes: evaluation tasks can be created in a user-defined way, which improves evaluation efficiency. Through tasks of different dimensions, the accuracy of auditors can be evaluated reliably and the quality of their work assured.
2) The invention divides evaluation into full evaluation and audited evaluation; through random extraction it can simultaneously and accurately evaluate the accuracy (accidental-injury rate and leakage rate) of both manual and machine audits, and it comprehensively reflects the safety of the platform content.
3) The invention extracts snapshot content from the audit records, which solves the quality-inspection problem for content products that are updated in real time: the audit result of the content snapshot for each user update is saved and can be extracted at random.
4) Based on the evaluation tasks of the invention, a quality-inspection work report can be generated automatically, visually reflecting the effect of the quality-inspection work and ensuring its validity.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
FIG. 2 is a screenshot of a statistics page in an embodiment of the invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings and specific embodiments.
In order to comprehensively and accurately reflect the safety compliance of platform content, as shown in FIG. 1, an embodiment of the invention discloses a method for evaluating the quality inspection of multimedia content, with MySQL and Redis as the main middleware and Go (golang) as the implementation language. The specific steps are as follows:
s1: an evaluation task is created according to the task configuration. The task configuration mainly comprises an evaluation business party, a time range, a quantity, conditions and the like of spot check, and is specifically described as follows:
(1) Business party to be evaluated: the business party evaluated this time mainly refers to articles, users, videos, comments and the like.
(2) Time range of the spot check.
(3) Quantity of the spot check.
(4) Spot-check conditions: an evaluation type is specified, which is either full evaluation ("large disk") or audited evaluation (snapshot); other conditions such as a service number, keywords, auditors or an audit state may also be specified.
Taking article evaluation as an example (the following examples use the article business as the dimension), the spot-check conditions are illustrated.
Specifying a service number: for example, if an article number is designated, the article details are obtained through that number and an evaluation task is generated.
Specifying keywords: for example, if keywords are designated, the article-library table is scanned sequentially and articles that hit the keywords are added to the evaluation task.
Specifying auditors: for articles that have already been audited, information about the auditor (a person or a machine) is recorded. The auditor's number can be designated; the audit-record table is scanned to find the audit records related to that auditor, the article number and its current audit state (pass/fail) are saved, and this information is added to the evaluation task.
Specifying an audit state: for example, if an audit state is designated, the audit-record table is scanned to obtain the article information corresponding to that state, which is added to the evaluation task.
Specifying the evaluation type. Full evaluation: articles are extracted at random from the full article library to generate the evaluation task; audited evaluation: articles are extracted at random from the already-audited article records, their audit states are attached, and the evaluation task is generated.
In this embodiment, for article extraction and evaluation, the corresponding MySQL data structure is as follows:
`id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'id',
`creator_id` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'creator',
`assess_state` tinyint(3) NOT NULL DEFAULT '0' COMMENT 'evaluation task state: 0 normal; 1 being created; 2 deleted; 3 creation failed',
`remark` varchar(256) NOT NULL DEFAULT '' COMMENT 'evaluation remark',
`create_time` int(10) unsigned NOT NULL COMMENT 'creation time',
`attr` varchar(2048) NOT NULL DEFAULT '' COMMENT 'evaluation attributes'
A sample of the evaluation attribute JSON:
{"is_snapshot":true,"reviewers":[5,78,305,320,328,329,331,332,334],"state":0,"s_time":1621327023,"e_time":1623919023,"total":100000}
wherein the "is _ snapshot" marks the evaluation type, and true is expressed as the reviewed evaluation; "reviewers" indicates the reviewer number associated with this extraction; "state" represents the audit state of extracting the original data of the article; "s _ time" and "e _ time" denote the extraction article start and end creation times, respectively.
S2: message queue monitoring task creation. After a task creator configures the foreground of the operation center and submits an evaluation task, the rear end receives the task and records the configuration of the task and the information of a specific operator in mysql, and simultaneously sends the main key id recorded by the current mysql to a redis message queue. The background listens to the redis message queue through the lpop instruction of the redis. Once the task queue is found to have data, S3 is entered to start a background process to configure the production data according to the evaluation recorded in mysql.
S3: production evaluation data. And according to the task configuration information, extracting the content meeting the task configuration information from the full data of the evaluation business party or the audit record, and recording the content in the evaluation table for the use of an evaluator. The specific production process comprises the following steps: firstly, extracting data from a corresponding extraction source at a fixed speed by a scanner according to task configuration and delivering the data to an evaluation generator; and collecting the data extracted by the scanner by the evaluation generator, and writing the related content into an evaluation table.
Specifically, the scanner extracts the relevant data from the extraction source according to the task configuration. The scanner data structure is described as follows:
where da is the database instance, cond is the extraction condition, batchsize is the number of items per extraction, firstid is the extraction start id, curid is the current scan cursor (the position the scan has reached), and lastid is the extraction end id.
The scanner has two methods, Next() and Load(). Next() mainly compares the scan cursor with the extraction end id: if the cursor value is greater than the end id, the scan terminates; otherwise Load() is called. Load() extracts data in a loop from the MySQL data source according to the extraction condition and passes it to the evaluation generator.
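A minimal Go sketch of such a scanner, reconstructed from the field and method descriptions above, is given below; the concrete types, the table name and the query are assumptions of this sketch, not the disclosed implementation.

```go
package assess

import (
	"database/sql"
	"fmt"
)

// Scanner walks an extraction source (a MySQL table) in batches between firstid and lastid.
type Scanner struct {
	da        *sql.DB // database instance
	cond      string  // extraction condition (an SQL WHERE fragment in this sketch)
	batchsize int     // number of rows per extraction
	firstid   int64   // extraction start id (curid starts from here)
	curid     int64   // current scan cursor
	lastid    int64   // extraction end id
}

// Next compares the scan cursor with the extraction end id; if the scan is not yet
// finished it loads the next batch, otherwise it reports that scanning should stop.
func (s *Scanner) Next(out chan<- int64) (bool, error) {
	if s.curid > s.lastid {
		return false, nil
	}
	return true, s.Load(out)
}

// Load extracts one batch of ids above the cursor and delivers them to the evaluation generator.
func (s *Scanner) Load(out chan<- int64) error {
	query := fmt.Sprintf(
		"SELECT id FROM articles WHERE id > ? AND id <= ? AND %s ORDER BY id LIMIT ?", s.cond)
	rows, err := s.da.Query(query, s.curid, s.lastid, s.batchsize)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var id int64
		if err := rows.Scan(&id); err != nil {
			return err
		}
		s.curid = id // advance the cursor
		out <- id    // hand the id to the evaluation generator
	}
	return rows.Err()
}
```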
The evaluation generator data structure is described as follows:
where assessID is the number of the evaluation task; record is the extraction source when the evaluation type is snapshot (audited); content is the extraction source when the evaluation type is large-disk (full); scanner is the scanner that extracts data from the source at a fixed rate; done indicates whether the process has finished and is used to notify the process to exit; running is the generator's running state, used to interrupt the current operation; and mu is a concurrency lock, used so that multiple scanners can operate on the database concurrently and safely.
The evaluation generator provides methods such as write(), buildStat(), writeStat(), importSnapshots(), importRandomArticles() and importByKeywords(). importSnapshots() is used for snapshot evaluation (i.e., audited evaluation): its extraction source is the business audit records, and each record is sent downstream to the write() method. importRandomArticles() is used for large-disk evaluation (i.e., full evaluation): it randomly extracts business content by dimensions such as time, machine/human audit, channel and content spread (read count, forward count, etc.) and sends it downstream to write(). importByKeywords() is used for keyword-based extraction: content matching the screening conditions (time range, state, etc.) is extracted, each item is matched against the selected keywords one by one, and items that hit a keyword are collected and sent downstream. write() collects the data from the upstream scanner and records the relevant content (the article snapshot content, the article's audit state when it was last audited, and the article content number) in the evaluation table for the evaluators to perform the evaluation task. buildStat() initializes the evaluation report when the task is generated and records the task's original data: mainly, for snapshot evaluation, how many items of the original audit state are blocked (fail) and how many are pass. writeStat() records the running evaluation state while the evaluators perform the task, i.e., how many items have been evaluated as blocked (fail) and how many as pass.
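A minimal Go sketch of the generator structure described above is given below; it reuses the Scanner type from the preceding sketch, and the method signature and table access shown are assumptions rather than the disclosed implementation.

```go
package assess

import "sync"

// AssessGenerator collects data from the scanner(s) and writes it into the evaluation table.
type AssessGenerator struct {
	assessID int64         // number of the evaluation task
	record   string        // extraction source for snapshot (audited) evaluation, e.g. the audit-record table
	content  string        // extraction source for large-disk (full) evaluation, e.g. the article table
	scanner  *Scanner      // scanner that feeds ids from the extraction source at a fixed rate
	done     chan struct{} // closed to notify the production process to exit
	running  bool          // running state, used to interrupt the current operation
	mu       sync.Mutex    // concurrency lock so several scanners can use the database safely
}

// write records one extracted item (snapshot content, the audit state at its last audit,
// and the content number) in the evaluation table; the body is only indicated here.
func (g *AssessGenerator) write(articleID int64, snapshot string, lastState int) error {
	g.mu.Lock()
	defer g.mu.Unlock()
	// An INSERT into the evaluation table would be issued here.
	return nil
}
```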
S4: after the data production is finished, the evaluators perform the evaluation task in the evaluation backend. After the task is completed, the statistical results can be viewed on the data-statistics page. As shown in FIG. 2, in the statistical report of article-evaluation data, the first row is the original article audit data and the second row is the totals after modification, i.e., the result of this evaluation. Comparing the two rows shows the missed and accidentally-rejected items, and every figure can be clicked through to see the detailed content. In addition, a chart can be displayed for a given evaluation task to track the work of the quality inspectors: if the number of completed evaluations grows evenly over time, the situation is normal; if the evaluation count stops growing as time passes, the inspectors' work is abnormal.
The accuracy rate, leakage rate and accidental-injury rate of the evaluation are obtained by comparing the audit record captured at extraction time (the first audit state) with the audit record produced by the final evaluator. For example, the accuracy rate is the number of items whose first audit state agrees with the evaluator's audit state divided by the total number of evaluated items; the leakage rate is the number of items whose first audit state is pass but whose evaluator audit state is fail divided by the total number of evaluated items; and the accidental-injury rate is the number of items whose first audit state is fail but whose evaluator audit state is pass divided by the total number of evaluated items.
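As an illustration, the three rates could be computed from the paired audit states as in the following Go sketch; the type and function names are assumptions of this sketch.

```go
package assess

// AuditPair pairs the first audit state (captured at extraction time) with the evaluator's state.
type AuditPair struct {
	FirstPass     bool // true if the first audit state was "pass"
	EvaluatorPass bool // true if the evaluator's audit state was "pass"
}

// rates returns the accuracy, leakage rate and accidental-injury rate defined above.
func rates(pairs []AuditPair) (accuracy, leakage, accidental float64) {
	if len(pairs) == 0 {
		return 0, 0, 0
	}
	var agree, missed, injured int
	for _, p := range pairs {
		switch {
		case p.FirstPass == p.EvaluatorPass: // first audit and evaluator agree
			agree++
		case p.FirstPass && !p.EvaluatorPass: // passed originally, rejected by the evaluator: missed content
			missed++
		default: // rejected originally, passed by the evaluator: accidental injury
			injured++
		}
	}
	total := float64(len(pairs))
	return float64(agree) / total, float64(missed) / total, float64(injured) / total
}
```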
Based on this method, the quality of the audit work can be evaluated through evaluation tasks of different dimensions, and the safety of the platform content can be reflected. For example, for a given auditor A, the snapshot content of articles he audited within a period of time (say 100 articles) can be extracted at random to evaluate his accuracy and accidental-injury rate; or 1000 items of online large-disk content from a period of time can be extracted at random to see the overall leakage rate and the safety of the online environment. Quality-inspection conditions can also be combined to analyze what manual or machine audits have missed, the data can be analyzed further, the audit rules refined, and the overall safety of the platform improved.
Based on the same inventive concept, the system for evaluating the quality inspection of multimedia content provided by an embodiment of the invention comprises: a task creation module for creating an evaluation task according to a task configuration, the configuration comprising the business party to be evaluated and the time range, quantity and conditions of the spot check; an evaluation data production module for monitoring the message queue and, when a task is found in the queue, starting a process to produce evaluation data by extracting content matching the task configuration information from the business party's full data or its audit records and recording it in the evaluation table for the evaluators to use; and an evaluation module for recording the evaluators' audit states after they complete the evaluation task and generating an evaluation report by comparing them with the audit states recorded when the content was last audited. For the specific implementation of each module, reference is made to the method embodiment above, and details are not repeated here.
Based on the same inventive concept, an embodiment of the invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program, when loaded into the processor, implements the above method for evaluating the quality inspection of multimedia content.
Claims (10)
1. A method for evaluating the quality inspection of multimedia content, comprising the following steps:
establishing an evaluation task according to a task configuration; the task configuration comprises the business party to be evaluated and the time range, quantity and conditions of the spot check; wherein the spot-check conditions at least comprise an evaluation type, the evaluation type being full evaluation or audited evaluation;
sending the created evaluation task to a message queue;
monitoring the message queue and, when a task is found in the queue, starting a process to produce evaluation data, specifically extracting content matching the task configuration information from the full data of the business party or from its audit records and recording it in an evaluation table for the evaluators to use; the evaluation table records the snapshot content of the evaluated business item, the audit state recorded when it was last audited, and the item's number;
and after the evaluators complete the evaluation task, recording their audit states and generating an evaluation report by comparing them with the audit states recorded when the content was last audited.
2. The method for evaluating the quality inspection of multimedia content according to claim 1, wherein the business party to be evaluated is articles, users, videos or comments.
3. The method of claim 1, wherein the condition of the spot check further comprises one or more of a designated service number, a designated keyword, a designated auditor, and a designated audit status.
4. The method for evaluating the quality inspection of multimedia content according to claim 3, wherein, when the evaluation type is full evaluation, either the latest business content is extracted at random or the business content hitting a designated keyword is extracted, depending on whether random extraction or keyword designation is configured; when the evaluation type is audited evaluation, already-audited business content is extracted by designating auditors and/or an audit state; and the audit state is pass or fail.
5. The method for evaluating the quality inspection of multimedia content according to claim 1, wherein, when the evaluation data is produced, a scanner extracts data from the corresponding extraction source at a fixed rate according to the task configuration and delivers it to an evaluation generator; the evaluation generator collects the data extracted by the scanner and writes the relevant content into the evaluation table.
6. The method for evaluating the quality inspection of multimedia content according to claim 1, wherein the accuracy rate, leakage rate and accidental-injury rate of the evaluated quality inspection are calculated by comparing the evaluator's audit state with the first audit state given by the original auditor or machine, wherein the accuracy rate is the number of items whose first audit state agrees with the evaluator's audit state divided by the total number of evaluated items; the leakage rate is the number of items whose first audit state is pass but whose evaluator audit state is fail divided by the total number of evaluated items; and the accidental-injury rate is the number of items whose first audit state is fail but whose evaluator audit state is pass divided by the total number of evaluated items.
7. A system for evaluating the quality inspection of multimedia content, comprising:
a task creation module for creating an evaluation task according to a task configuration and sending the created task to a message queue; the task configuration comprises the business party to be evaluated and the time range, quantity and conditions of the spot check, the spot-check conditions at least comprising an evaluation type, which is either full evaluation or audited evaluation;
an evaluation data production module for monitoring the message queue and, when a task is found in the queue, starting a process to produce evaluation data, specifically extracting content matching the task configuration information from the full data of the business party or from its audit records and recording it in the evaluation table for the evaluators to use; the evaluation table records the snapshot content of the evaluated business item, the audit state recorded when it was last audited, and the item's number;
and an evaluation module for recording the evaluators' audit states after they complete the evaluation task and generating an evaluation report by comparing them with the audit states recorded when the content was last audited.
8. The system for evaluating the quality inspection of multimedia content according to claim 7, wherein, when the evaluation type is full evaluation, the evaluation data production module extracts the latest business content at random or extracts the business content hitting a designated keyword according to the configuration information; when the evaluation type is audited evaluation, already-audited business content is extracted by designating auditors and/or an audit state; and the audit state is pass or fail.
9. The system for evaluating the quality inspection of multimedia content according to claim 7, wherein the evaluation module calculates the accuracy rate, leakage rate and accidental-injury rate of the evaluated quality inspection by comparing the evaluator's audit state with the first audit state given by the original auditor or machine, wherein the accuracy rate is the number of items whose first audit state agrees with the evaluator's audit state divided by the total number of evaluated items; the leakage rate is the number of items whose first audit state is pass but whose evaluator audit state is fail divided by the total number of evaluated items; and the accidental-injury rate is the number of items whose first audit state is fail but whose evaluator audit state is pass divided by the total number of evaluated items.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the computer program, when loaded into the processor, implements the method for evaluating the quality inspection of multimedia content according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110879702.1A CN113780723A (en) | 2021-08-02 | 2021-08-02 | Method, system and equipment for evaluating quality inspection of multimedia content |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110879702.1A CN113780723A (en) | 2021-08-02 | 2021-08-02 | Method, system and equipment for evaluating quality inspection of multimedia content |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113780723A true CN113780723A (en) | 2021-12-10 |
Family
ID=78836537
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110879702.1A Pending CN113780723A (en) | 2021-08-02 | 2021-08-02 | Method, system and equipment for evaluating quality inspection of multimedia content |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113780723A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114090305A (en) * | 2022-01-19 | 2022-02-25 | 飞狐信息技术(天津)有限公司 | Business auditing method and device |
CN115577867A (en) * | 2022-12-09 | 2023-01-06 | 深圳海智创科技有限公司 | Method and system for creating spot check task, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191080A (en) * | 2018-09-17 | 2019-01-11 | 北京点网聚科技有限公司 | One quality testing method and device |
CN109284894A (en) * | 2018-08-10 | 2019-01-29 | 广州虎牙信息科技有限公司 | Picture examination method, apparatus, storage medium and computer equipment |
CN109783689A (en) * | 2018-12-28 | 2019-05-21 | 广州华多网络科技有限公司 | Information processing method, device and electronic equipment |
CN113157699A (en) * | 2021-04-25 | 2021-07-23 | 上海淇玥信息技术有限公司 | Business data auditing method and device and electronic equipment |
- 2021-08-02 CN CN202110879702.1A patent/CN113780723A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109284894A (en) * | 2018-08-10 | 2019-01-29 | 广州虎牙信息科技有限公司 | Picture examination method, apparatus, storage medium and computer equipment |
CN109191080A (en) * | 2018-09-17 | 2019-01-11 | 北京点网聚科技有限公司 | One quality testing method and device |
CN109783689A (en) * | 2018-12-28 | 2019-05-21 | 广州华多网络科技有限公司 | Information processing method, device and electronic equipment |
CN113157699A (en) * | 2021-04-25 | 2021-07-23 | 上海淇玥信息技术有限公司 | Business data auditing method and device and electronic equipment |
Non-Patent Citations (1)
Title |
---|
CTI Forum report (CTI论坛报道): "Synway agent quality-inspection system" (三汇座席质检系统), pages 2-3, Retrieved from the Internet <URL:http://www.ctiforum.com/factory/f08_08/www.synway.com/synway09_0702.htm> *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114090305A (en) * | 2022-01-19 | 2022-02-25 | 飞狐信息技术(天津)有限公司 | Business auditing method and device |
CN114090305B (en) * | 2022-01-19 | 2022-04-26 | 飞狐信息技术(天津)有限公司 | Business auditing method and device |
CN115577867A (en) * | 2022-12-09 | 2023-01-06 | 深圳海智创科技有限公司 | Method and system for creating spot check task, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Alali et al. | What's a typical commit? a characterization of open source software repositories | |
Knab et al. | Predicting defect densities in source code files with decision tree learners | |
US7792950B2 (en) | Coverage analysis of program code that accesses a database | |
US7493521B1 (en) | Apparatus and method for estimating the testing proficiency of a software test according to EMS messages extracted from a code base | |
CN113780723A (en) | Method, system and equipment for evaluating quality inspection of multimedia content | |
Swinnen et al. | A process deviation analysis–a case study | |
Tarhan et al. | Apply quantitative management now | |
Ang et al. | Revisiting the practical use of automated software fault localization techniques | |
Arnaoudova et al. | Physical and conceptual identifier dispersion: Measures and relation to fault proneness | |
CN115328784A (en) | Agile interface-oriented automatic testing method and system | |
CN112433948A (en) | Simulation test system and method based on network data analysis | |
Ghazarian | Characterization of functional software requirements space: The law of requirements taxonomic growth | |
McMaster et al. | Fault detection probability analysis for coverage-based test suite reduction | |
Akca et al. | Run-time measurement of cosmic functional size for java business applications: Initial results | |
Tripathi et al. | A controlled experiment to evaluate the effectiveness and the efficiency of four static program analysis tools for Java programs | |
Plosch et al. | On the relation between external software quality and static code analysis | |
Huang et al. | A method of bug report quality detection based on vector space model | |
Illes-Seifert et al. | Exploring the relationship of history characteristics and defect count: an empirical study | |
CN111179010A (en) | Online notarization method, system, device and medium for unreasonable price products | |
Pintér et al. | Integration of OLAP and data mining for analysis of results from dependability evaluation experiments | |
Chandorkar et al. | An exploratory study on the usage of gherkin features in open-source projects | |
CN111240882B (en) | Method and system for detecting abnormal state | |
Santos et al. | Experimental Evaluation of Automatic Tests Cases in Data Analytics Applications Loading Procedures | |
CN117194267B (en) | Software quality rating system based on cloud platform | |
Wang et al. | Domain invariant-based spreadsheet debugging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |