CN111915228A - High-reliability work platform task workload assessment method - Google Patents

High-reliability work platform task workload assessment method Download PDF

Info

Publication number
CN111915228A
CN111915228A · Application CN202010852907.6A
Authority
CN
China
Prior art keywords
task
evaluation
multiplying
complexity
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010852907.6A
Other languages
Chinese (zh)
Inventor
王�琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Hollow Technology Co ltd
Original Assignee
Wuhan Hollow Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Hollow Technology Co ltd filed Critical Wuhan Hollow Technology Co ltd
Priority to CN202010852907.6A priority Critical patent/CN111915228A/en
Publication of CN111915228A publication Critical patent/CN111915228A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a high-reliability method for assessing the workload of tasks on a work platform, and relates to the field of workload assessment; it addresses the problem of evaluation reliability. The method comprises the following steps: a workload assessment task is started; the task is uploaded to the work platform, which forwards it to a task quantity evaluation module; the module acquires the task information, performs the workload evaluation, retrieves multi-level keywords from the task information, and classifies the task by the matching multi-level keywords. A preliminary evaluation is then carried out: the task quantity evaluation module sends a search request to the task benchmark library according to the classified task type, and the library looks up the benchmark score of the task template of the corresponding type and feeds it back to the module. By providing a benchmark task library, the task template corresponding to the multi-level keywords can be retrieved quickly, so comparisons are fast and evaluation efficiency is improved.

Description

High-reliability work platform task workload assessment method
Technical Field
The invention relates to the technical field of workload assessment, and in particular to a high-reliability method for assessing the workload of work-platform tasks.
Background
A work platform is an internet platform that provides work-management services in a crowdsourcing mode: a task issuer publishes work-task requirements to the platform, and the platform must evaluate the workload of each task so that tasks can be assigned more fairly and the corresponding commission can be determined.
A search of the prior art found Chinese patent application CN201611227800.2, which discloses a method and device for counting a user's workload. The method comprises: acquiring articles published by a target user on a public network platform within a preset time period; determining a quality score for each acquired article, the quality score representing the quality of the corresponding article; and calculating the total quality score of the published articles and determining the user's workload from that total. Such a statistical method cannot judge workload reliably against the many factors involved, so large statistical deviations easily arise when it is applied in practice, degrading the user experience.
Disclosure of Invention
The invention aims to solve the defects in the prior art and provides a high-reliability work platform task workload evaluation method.
In order to achieve the purpose, the invention adopts the following technical scheme:
a high-reliability work platform task workload assessment method comprises the following steps:
S1: start the workload assessment task; the task is uploaded to the work platform, which forwards it to the task quantity evaluation module; the module acquires the task information, begins the workload evaluation, retrieves multi-level keywords from the task information, and classifies the task according to the matching multi-level keywords;
S2: perform the preliminary evaluation; the task quantity evaluation module sends a search request to the task benchmark library according to the classified task type, and the library looks up the benchmark score of the task template of the corresponding type and feeds it back to the module;
S3: perform the overall evaluation; the task quantity evaluation module evaluates the task's content, complexity, and similarity against the template, adjusts the benchmark score accordingly, and obtains the overall task quantity score;
S4: check whether the task benchmark library should be supplemented; the task is uploaded to a checking module, which compares the task against the task benchmark library and decides from the comparison whether the task should enter the library; if not, go to S6;
S5: add a task template; assign the task a category code according to its category, screen out the multi-level keywords that differ from those already in the benchmark library, set the task's task quantity score as the benchmark score, bind the benchmark score to the multi-level keywords, and store them in the task benchmark library as a new task template;
S6: correct the task quantity score; the task quantity score is stored temporarily and, after the task is completed, corrected according to the actual completion time and degree of completion to obtain the final task quantity score;
S7: feed the result back and record it in the system (a minimal code sketch of this flow follows the list).
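For concreteness, the following is a minimal, self-contained Python sketch of the S1-S7 flow. Everything in it (the TaskTemplate class, the example benchmark_library entries and category codes, the keyword-overlap classifier, and the two-new-keyword novelty rule) is an illustrative assumption made for this sketch and is not specified by the patent.

    from dataclasses import dataclass

    @dataclass
    class TaskTemplate:
        category: str           # category code, e.g. a 5-letter code such as "WEBFE"
        keywords: frozenset     # multi-level keywords bound to the template
        benchmark_score: float  # benchmark score stored in the task benchmark library

    # Task benchmark library, keyed by category code (S2 look-ups, S5 insertions).
    benchmark_library = {
        "WEBFE": TaskTemplate("WEBFE", frozenset({"web", "frontend", "form"}), 100.0),
        "DATAP": TaskTemplate("DATAP", frozenset({"data", "etl", "report"}), 120.0),
    }

    def classify(task_keywords):
        """S1: choose the template whose keyword set overlaps the task's most."""
        return max(benchmark_library.values(),
                   key=lambda t: len(t.keywords & task_keywords))

    def assess(task_keywords, content_mult, complexity_mult, similarity_mult):
        """S1-S5: produce the provisional overall task score."""
        template = classify(task_keywords)                            # S1
        score = template.benchmark_score                              # S2
        score *= content_mult * complexity_mult * similarity_mult    # S3
        novel = len(task_keywords - template.keywords) >= 2          # S4 (assumed rule)
        if novel:                                                    # S5
            category = "NEW" + str(len(benchmark_library)).zfill(2)  # placeholder code
            benchmark_library[category] = TaskTemplate(
                category, frozenset(task_keywords), score)
        return score  # S6 (post-completion correction) and S7 happen later

    # Example: a web task with one extra keyword, slightly above the template.
    print(assess({"web", "form", "responsive"}, 1.2, 1.0, 0.9))  # 100 * 1.2 * 1.0 * 0.9 ≈ 108

In this sketch the three multipliers are supplied as already-computed values; how they would be derived from the multiplier tiers is shown in the sketch after the evaluation-item description below.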
Preferably: the multi-level keywords comprise at least three levels, ranked from the first level to the third level in order of priority, and the category code is a 5-character alphabetic code.
Preferably: the evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 80% to 140% in tiers of 20%; the complexity multiplier ranges from 80% to 140% in tiers of 20%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content multiplier, the complexity multiplier, and the similarity multiplier to obtain the overall task score (a scoring sketch follows this paragraph).
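The tier ("gear") binding described above can be read as clamping a raw difference coefficient into the allowed range and snapping it to the nearest step. The sketch below assumes the difference coefficient is already expressed as a ratio (e.g. 1.07 for a task 7% larger than its template); the patent does not define the coefficient's exact form, so this is one plausible realisation.

    def snap_to_tier(ratio, low=0.8, high=1.4, step=0.2):
        """Clamp a raw ratio into [low, high] and snap it to the nearest tier."""
        ratio = min(max(ratio, low), high)
        n_steps = round((ratio - low) / step)
        return low + n_steps * step

    def overall_task_score(benchmark, content_ratio, complexity_ratio, similarity_ratio):
        # Content and complexity: 80%-140% in 20% tiers; similarity: 10% tiers.
        content_mult = snap_to_tier(content_ratio, 0.8, 1.4, 0.2)
        complexity_mult = snap_to_tier(complexity_ratio, 0.8, 1.4, 0.2)
        similarity_mult = snap_to_tier(similarity_ratio, 0.8, 1.4, 0.1)
        return benchmark * content_mult * complexity_mult * similarity_mult

    # Example: benchmark 100, slightly more content, equal complexity,
    # somewhat lower similarity than the template.
    print(overall_task_score(100.0, 1.07, 1.0, 0.93))  # 100 * 1.0 * 1.0 * 0.9 ≈ 90

The other embodiments below change only the range and step parameters (for example 70%-150% in 20% tiers, or 10% tiers throughout), which in this sketch corresponds to different low/high/step arguments.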
Preferably: the task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 2 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time-difference compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 70% to 100%, to obtain the final task-workload score (a correction sketch follows this paragraph).
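The patent gives only the bounds of the time-difference compensation coefficient (1 to 1.2, proportional to the time taken) and the window in which it applies (between 1.2 and 2 times the reference time); the linear interpolation in the sketch below is an assumed realisation of that rule.

    def final_task_score(overall_score, completion_time, reference_time, completion_rate):
        ratio = completion_time / reference_time
        if 1.2 <= ratio <= 2.0:
            # Linearly map [1.2x, 2.0x] of the reference time onto [1.0, 1.2].
            compensation = 1.0 + 0.2 * (ratio - 1.2) / (2.0 - 1.2)
        else:
            compensation = 1.0  # outside the window the reference time governs
        completion_rate = min(max(completion_rate, 0.7), 1.0)  # clamp to 70%-100%
        return overall_score * compensation * completion_rate

    # Example: overall score 90, finished in 1.6x the reference time, 95% complete.
    print(final_task_score(90.0, completion_time=16, reference_time=10,
                           completion_rate=0.95))  # 90 * 1.1 * 0.95 ≈ 94.05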
Preferably: the multi-level keywords comprise at least three levels, ranked from the first level to the third level in order of priority, and the category code is an 8-character alphabetic code.
Preferably: the evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 70% to 150% in tiers of 20%; the complexity multiplier ranges from 70% to 150% in tiers of 20%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content, complexity, and similarity multipliers to obtain the overall task score.
Preferably: the task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 1.8 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 60% to 100%, to obtain the final task-workload score.
Preferably: the multi-level keywords comprise at most four levels, ranked from the first level to the fourth level in order of priority, and the category code is an 8-character alphanumeric code.
Preferably: the evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 80% to 140% in tiers of 10%; the complexity multiplier ranges from 80% to 140% in tiers of 10%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content, complexity, and similarity multipliers to obtain the overall task score.
Preferably: the task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 1.8 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 50% to 100%, to obtain the final task-workload score.
The invention has the beneficial effects that:
1. by providing a benchmark task library, the task template corresponding to the multi-level keywords can be retrieved quickly, so comparisons are fast and evaluation efficiency is improved;
2. by evaluating the workload's content, complexity, and similarity against the task template in the benchmark task library, multi-dimensional evaluation is achieved, which reduces errors and improves reliability;
3. by auditing a task after the overall evaluation, comparing it against the task benchmark library, and deciding from the differences whether it should enter the library, the library can be enriched, providing a sound basis for accurate and reliable evaluation;
4. by correcting the task quantity score according to the actual completion time and degree of completion, the influence of external factors is offset, malicious delays of the completion time are discouraged, and user growth is supported.
Drawings
Fig. 1 is a flowchart of the high-reliability work-platform task-workload assessment method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example 1:
a high-reliability work platform task workload assessment method sequentially comprises the following steps:
s1: starting a workload evaluation task; uploading the tasks to a working platform, sending the tasks to a task quantity evaluation module by the working platform, acquiring task information by the task quantity evaluation module, executing the task quantity evaluation, retrieving multi-level keywords according to the task information, and classifying the tasks through the corresponding multi-level keywords;
s2: carrying out preliminary evaluation operation; the task quantity evaluation module sends search information to a task benchmark base according to the classified task types, the task benchmark base searches benchmark scores of corresponding type task templates according to the information and feeds the benchmark scores back to the task quantity evaluation module;
s3: carrying out integral evaluation operation; the task quantity evaluation module is used for respectively carrying out content quantity evaluation, complexity evaluation and similarity evaluation on the tasks according to the benchmark scores, and carrying out corresponding adjustment on the benchmark scores according to the evaluation results to obtain the overall task quantity scores;
s4: checking whether a task reference library is filled; uploading the task to a checking module, comparing the difference between the task and a task reference library by the checking module, selecting whether to enter the task reference library according to a comparison result, and if not, switching to S6;
s5: adding a task template; adding category numbers to tasks according to categories, screening multilevel keywords different from a task benchmark library, setting task quantity scores of the tasks as benchmark scores, binding the benchmark scores and the multilevel keywords, and storing the benchmark scores and the multilevel keywords into the task benchmark library to form a new task template;
s6: correcting the task quantity score; temporarily storing the task quantity score of the task, and correcting the task through actual completion time and completion degree after the task is completed to obtain a final task quantity score;
s7: and feeding back the result and recording the result into the system.
The multi-level keywords comprise at least three levels, ranked from the first level to the third level in order of priority, and the category code is a 5-character alphabetic code.
The evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 80% to 140% in tiers of 20%; the complexity multiplier ranges from 80% to 140% in tiers of 20%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content, complexity, and similarity multipliers to obtain the overall task score.
The task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 2 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time-difference compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 70% to 100%, to obtain the final task-workload score.
Example 2:
a high-reliability work platform task workload assessment method sequentially comprises the following steps:
s1: starting a workload evaluation task; uploading the tasks to a working platform, sending the tasks to a task quantity evaluation module by the working platform, acquiring task information by the task quantity evaluation module, executing the task quantity evaluation, retrieving multi-level keywords according to the task information, and classifying the tasks through the corresponding multi-level keywords;
s2: carrying out preliminary evaluation operation; the task quantity evaluation module sends search information to a task benchmark base according to the classified task types, the task benchmark base searches benchmark scores of corresponding type task templates according to the information and feeds the benchmark scores back to the task quantity evaluation module;
s3: carrying out integral evaluation operation; the task quantity evaluation module is used for respectively carrying out content quantity evaluation, complexity evaluation and similarity evaluation on the tasks according to the benchmark scores, and carrying out corresponding adjustment on the benchmark scores according to the evaluation results to obtain the overall task quantity scores;
s4: checking whether a task reference library is filled; uploading the task to a checking module, comparing the difference between the task and a task reference library by the checking module, selecting whether to enter the task reference library according to a comparison result, and if not, switching to S6;
s5: adding a task template; adding category numbers to tasks according to categories, screening multilevel keywords different from a task benchmark library, setting task quantity scores of the tasks as benchmark scores, binding the benchmark scores and the multilevel keywords, and storing the benchmark scores and the multilevel keywords into the task benchmark library to form a new task template;
s6: correcting the task quantity score; temporarily storing the task quantity score of the task, and correcting the task through actual completion time and completion degree after the task is completed to obtain a final task quantity score;
s7: and feeding back the result and recording the result into the system.
The multi-level keywords comprise at least three levels, ranked from the first level to the third level in order of priority, and the category code is an 8-character alphabetic code.
The evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 70% to 150% in tiers of 20%; the complexity multiplier ranges from 70% to 150% in tiers of 20%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content, complexity, and similarity multipliers to obtain the overall task score.
The task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 1.8 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 60% to 100%, to obtain the final task-workload score.
Example 3:
a high-reliability work platform task workload assessment method sequentially comprises the following steps:
s1: starting a workload evaluation task; uploading the tasks to a working platform, sending the tasks to a task quantity evaluation module by the working platform, acquiring task information by the task quantity evaluation module, executing the task quantity evaluation, retrieving multi-level keywords according to the task information, and classifying the tasks through the corresponding multi-level keywords;
s2: carrying out preliminary evaluation operation; the task quantity evaluation module sends search information to a task benchmark base according to the classified task types, the task benchmark base searches benchmark scores of corresponding type task templates according to the information and feeds the benchmark scores back to the task quantity evaluation module;
s3: carrying out integral evaluation operation; the task quantity evaluation module is used for respectively carrying out content quantity evaluation, complexity evaluation and similarity evaluation on the tasks according to the benchmark scores, and carrying out corresponding adjustment on the benchmark scores according to the evaluation results to obtain the overall task quantity scores;
s4: checking whether a task reference library is filled; uploading the task to a checking module, comparing the difference between the task and a task reference library by the checking module, selecting whether to enter the task reference library according to a comparison result, and if not, switching to S6;
s5: adding a task template; adding category numbers to tasks according to categories, screening multilevel keywords different from a task benchmark library, setting task quantity scores of the tasks as benchmark scores, binding the benchmark scores and the multilevel keywords, and storing the benchmark scores and the multilevel keywords into the task benchmark library to form a new task template;
s6: correcting the task quantity score; temporarily storing the task quantity score of the task, and correcting the task through actual completion time and completion degree after the task is completed to obtain a final task quantity score;
s7: and feeding back the result and recording the result into the system.
The multi-level keywords comprise at most four levels, ranked from the first level to the fourth level in order of priority, and the category code is an 8-character alphanumeric code.
The evaluation items in S3 are as follows. Content evaluation: the content of the task to be evaluated is compared with the content of the corresponding task template to obtain a difference coefficient, which is bound to the closest content-multiplier tier to give the content adjustment. Complexity evaluation: the complexity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest complexity-multiplier tier to give the complexity adjustment. Similarity evaluation: the similarity of the task is compared with that of the template to obtain a difference coefficient, which is bound to the closest similarity-multiplier tier to give the similarity adjustment. The content multiplier ranges from 80% to 140% in tiers of 10%; the complexity multiplier ranges from 80% to 140% in tiers of 10%; the similarity multiplier ranges from 80% to 140% in tiers of 10%. Finally, the benchmark score is multiplied in turn by the content, complexity, and similarity multipliers to obtain the overall task score.
The task-score correction in S6 is as follows. The completion time is compared with the reference time of the task template. If the completion time is less than 1.2 times or more than 1.8 times the reference time, the reference time is taken as the standard; otherwise the overall task quantity score is multiplied by a time compensation coefficient between 1 and 1.2 that is proportional to the time taken. The result is then multiplied by the overall completion rate, which ranges from 50% to 100%, to obtain the final task-workload score.
The above description is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification of the technical solutions and inventive concept described herein that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A high-reliability work platform task workload assessment method is characterized by comprising the following steps:
s1: starting a workload evaluation task; uploading the tasks to a working platform, sending the tasks to a task quantity evaluation module by the working platform, acquiring task information by the task quantity evaluation module, executing the task quantity evaluation, retrieving multi-level keywords according to the task information, and classifying the tasks through the corresponding multi-level keywords;
s2: carrying out preliminary evaluation operation; the task quantity evaluation module sends search information to a task benchmark base according to the classified task types, the task benchmark base searches benchmark scores of corresponding type task templates according to the information and feeds the benchmark scores back to the task quantity evaluation module;
s3: carrying out integral evaluation operation; the task quantity evaluation module is used for respectively carrying out content quantity evaluation, complexity evaluation and similarity evaluation on the tasks according to the benchmark scores, and carrying out corresponding adjustment on the benchmark scores according to the evaluation results to obtain the overall task quantity scores;
s4: checking whether a task reference library is filled; uploading the task to a checking module, comparing the difference between the task and a task reference library by the checking module, selecting whether to enter the task reference library according to a comparison result, and if not, switching to S6;
s5: adding a task template; adding category numbers to tasks according to categories, screening multilevel keywords different from a task benchmark library, setting task quantity scores of the tasks as benchmark scores, binding the benchmark scores and the multilevel keywords, and storing the benchmark scores and the multilevel keywords into the task benchmark library to form a new task template;
s6: correcting the task quantity score; temporarily storing the task quantity score of the task, and correcting the task through actual completion time and completion degree after the task is completed to obtain a final task quantity score;
s7: and feeding back the result and recording the result into the system.
2. The method for evaluating the task workload of the working platform with high reliability as claimed in claim 1, wherein the multi-level keywords are divided into at most three levels of keywords, which are sequentially from one level to three levels according to priority ordering, and the category number is a 5-digit letter number.
3. The method for evaluating task workload of a work platform with high reliability as claimed in claim 2, wherein the evaluation in S3 is specifically: content evaluation: comparing the content of the task to be evaluated with the corresponding content of the task template to obtain a difference coefficient, and binding the difference coefficient with the closest content multiplying factor gear to obtain a content increasing value; and (3) complexity evaluation: comparing the complexity of the task to be evaluated with the complexity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest complexity multiplying factor gear to obtain a complexity amplification value; and (3) similarity evaluation: comparing the similarity of the task to be evaluated with the similarity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest similarity multiplying factor gear to obtain a similarity amplification value; the multiplying power range of the internal capacity multiplying power is 80-140%, wherein every 20% is a first grade; the multiplying power range of the complexity multiplying power is 80% -140%, wherein every 20% is a first grade; the multiplying power range of the similarity multiplying power is 80-140%, wherein every 10% is a first grade; and finally, multiplying the reference score by the content multiplying power, the complexity multiplying power and the similarity multiplying power in sequence to obtain the integral task score.
4. The method for evaluating the task workload of the working platform with high reliability according to claim 3, wherein the task score of the step S6 is modified specifically as follows: comparing the completion time with the reference time of the task template, and taking the reference time as a standard when the completion time is less than 1.2 times or more than 2 times of the reference time, or multiplying the integral task quantity score by a time difference compensation coefficient, wherein the time difference compensation coefficient is 1-1.2 and is in direct proportion to the time quantity; and multiplying the result by the time difference coefficient and then multiplying the result by the overall completion rate to obtain the final task workload score, wherein the overall completion rate ranges from 70% to 100%.
5. The method for evaluating the task workload of the working platform with high reliability as claimed in claim 1, wherein the multi-level keywords are divided into at most three levels of keywords, which are sequentially from one level to three levels according to priority ordering, and the category number is an 8-bit letter number.
6. The method for evaluating task workload of a work platform with high reliability as claimed in claim 5, wherein the evaluation in S3 is specifically: content evaluation: comparing the content of the task to be evaluated with the corresponding content of the task template to obtain a difference coefficient, and binding the difference coefficient with the closest content multiplying factor gear to obtain a content increasing value; and (3) complexity evaluation: comparing the complexity of the task to be evaluated with the complexity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest complexity multiplying factor gear to obtain a complexity amplification value; and (3) similarity evaluation: comparing the similarity of the task to be evaluated with the similarity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest similarity multiplying factor gear to obtain a similarity amplification value; the multiplying power range of the internal capacity multiplying power is 70-150%, wherein every 20% is a first grade; the multiplying power range of the complexity multiplying power is 70% -150%, wherein every 20% is a first grade; the multiplying power range of the similarity multiplying power is 80-140%, wherein every 10% is a first grade; and finally, multiplying the reference score by the content multiplying power, the complexity multiplying power and the similarity multiplying power in sequence to obtain the integral task score.
7. The method for evaluating the task workload of the work platform with high reliability according to claim 6, wherein the task score of the step S6 is modified specifically as follows: comparing the completion time with the reference time of the task template, taking the reference time as a standard when the completion time is less than 1.2 times of the reference time or more than 1.8 times of the reference time, otherwise, multiplying the integral task quantity score by a time compensation coefficient, wherein the time compensation coefficient is 1-1.2 and is in direct proportion to the time quantity; and multiplying the result by the time difference coefficient and then multiplying the result by the overall completion rate to obtain the final task workload score, wherein the overall completion rate ranges from 60% to 100%.
8. The method for evaluating the task workload of the working platform with high reliability as claimed in claim 1, wherein the multi-level keywords are classified into at most four levels of keywords which are sequentially ranked from one level to four levels according to priority, and the category number is a mixed number of 8 digits and letters.
9. The method for evaluating task workload of a work platform with high reliability as claimed in claim 8, wherein the evaluation in S3 is specifically: content evaluation: comparing the content of the task to be evaluated with the corresponding content of the task template to obtain a difference coefficient, and binding the difference coefficient with the closest content multiplying factor gear to obtain a content increasing value; and (3) complexity evaluation: comparing the complexity of the task to be evaluated with the complexity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest complexity multiplying factor gear to obtain a complexity amplification value; and (3) similarity evaluation: comparing the similarity of the task to be evaluated with the similarity of the corresponding task template to obtain a difference coefficient, and binding the difference coefficient with the closest similarity multiplying factor gear to obtain a similarity amplification value; the multiplying power range of the internal capacity multiplying power is 80-140%, wherein every 10% is a first grade; the multiplying power range of the complexity multiplying power is 80% -140%, wherein every 10% is a first grade; the multiplying power range of the similarity multiplying power is 80-140%, wherein every 10% is a first grade; and finally, multiplying the reference score by the content multiplying power, the complexity multiplying power and the similarity multiplying power in sequence to obtain the integral task score.
10. The method for evaluating the task workload of the work platform with high reliability according to claim 9, wherein the task score of S6 is modified specifically as follows: comparing the completion time with the reference time of the task template, taking the reference time as a standard when the completion time is less than 1.2 times of the reference time or more than 1.8 times of the reference time, otherwise, multiplying the integral task quantity score by a time compensation coefficient, wherein the time compensation coefficient is 1-1.2 and is in direct proportion to the time quantity; and multiplying the result by the time difference coefficient and then multiplying the result by the overall completion rate to obtain the final task workload score, wherein the overall completion rate ranges from 50% to 100%.
CN202010852907.6A 2020-08-22 2020-08-22 High-reliability work platform task workload assessment method Pending CN111915228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010852907.6A CN111915228A (en) 2020-08-22 2020-08-22 High-reliability work platform task workload assessment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010852907.6A CN111915228A (en) 2020-08-22 2020-08-22 High-reliability work platform task workload assessment method

Publications (1)

Publication Number Publication Date
CN111915228A true CN111915228A (en) 2020-11-10

Family

ID=73279298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010852907.6A Pending CN111915228A (en) 2020-08-22 2020-08-22 High-reliability work platform task workload assessment method

Country Status (1)

Country Link
CN (1) CN111915228A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732307A (en) * 2013-12-18 2015-06-24 北京神州泰岳软件股份有限公司 Project workload acquisition method and system
CN106485409A (en) * 2016-09-30 2017-03-08 上海斐讯数据通信技术有限公司 A kind of workload apparatus for evaluating and method
CN110264106A (en) * 2019-06-28 2019-09-20 浪潮卓数大数据产业发展有限公司 A kind of project work amount assessment system and method based on agile management exploitation
CN111507557A (en) * 2019-12-09 2020-08-07 武汉空心科技有限公司 Multi-role-based work platform task allocation method and system
CN111124376A (en) * 2020-01-20 2020-05-08 众能联合数字技术有限公司 Project building system for cod-eCli scaffold


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201110