WO2019200736A1 - Operating method and device for crowdsourcing platform, computer device and storage medium
- Publication number: WO2019200736A1
- Application number: PCT/CN2018/095318
- Authority: WIPO (PCT)
- Prior art keywords
- crowdsourcing
- account
- task
- test
- tasks
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/40—Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2178—Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/101—Collaborative creation, e.g. joint development of products or services
Definitions
- The present invention relates to the field of crowdsourcing, and more particularly to an operating method, apparatus, computer device and storage medium for a crowdsourcing platform.
- Crowdsourcing is a term coined in 2006 by Wired magazine to describe a new business model in which companies use the Internet to distribute work, discover ideas, or solve technical problems.
- When an enterprise wants to verify or improve image recognition, it issues an image proofreading task on the crowdsourcing platform in a crowdsourced manner to collect user feedback.
- A user obtains an image proofreading task on the crowdsourcing platform and judges whether the recognition result automatically given by the system is consistent with the image content; if it is inconsistent, the user corrects it and then clicks the "Next" button to submit the answer.
- Because the crowdsourcing platform already displays the system's recognition result for each image, a user can simply submit the system answer as-is. The verification principle is that the same task is dispatched to at least three users and the collected answers are cross-matched under a majority rule, so when most users directly submit a wrong system answer without checking it, the system judges that answer to be correct even though it may actually be wrong. This leaves a loophole for point-farming ("brushing") behavior, which calls for effective anti-brushing measures and real-time monitoring.
- A primary object of the present invention is to provide an operating method, apparatus, computer device and storage medium for a crowdsourcing platform that can test a crowdsourcing account for behaviors such as brushing.
- To achieve the above object, the present invention provides an operating method for a crowdsourcing platform, including: sending a test task to a crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task; and receiving feedback information from the crowdsourcing account for the test task and comparing it with a preset correct answer to determine whether the crowdsourcing account answered correctly.
- If the crowdsourcing account answered incorrectly, other test tasks are retrieved from the test task library and sent to the crowdsourcing account until the crowdsourcing account answers correctly, after which crowdsourcing tasks are sent to the crowdsourcing account.
- When the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The present invention further provides an operating device for a crowdsourcing platform, comprising:
- a test-sending unit, configured to send a test task to the crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
- a receiving and comparison unit, configured to receive feedback information from the crowdsourcing account for the test task and compare it with a preset correct answer to determine whether the crowdsourcing account answered correctly;
- an action execution unit, configured to, if the crowdsourcing account answered incorrectly, retrieve other test tasks from the test task library and send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then send crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The present invention further provides a computer device comprising a memory and a processor, the memory storing computer readable instructions, and the processor implementing the steps of any of the methods described above when executing the computer readable instructions.
- The present invention further provides a non-transitory computer readable storage medium on which computer readable instructions are stored; when executed by a processor, the computer readable instructions implement the steps of any of the methods described above.
- With the operating method, device, computer equipment and storage medium of the present invention, because a test task has the same format as a regular crowdsourcing task, the operator of a crowdsourcing account cannot tell which of the tasks sent by the crowdsourcing platform are test tasks and which are crowdsourcing tasks. The crowdsourcing platform can therefore test, without the operator noticing, whether the operator is brushing, and this keeps users in a state of answering carefully, so that the platform obtains the most valuable task answers possible. Moreover, if a crowdsourcing user answers test tasks incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks is suspended, effectively preventing the user's brushing behavior.
- FIG. 1 is a schematic flow chart of a method for operating a crowdsourcing platform according to an embodiment of the present invention
- FIG. 2 is a schematic flow chart of a method for operating a crowdsourcing platform according to an embodiment of the present invention
- FIG. 3 is a schematic flow chart of a method for operating a crowdsourcing platform according to an embodiment of the present invention
- FIG. 4 is a schematic block diagram showing the structure of a working device of a crowdsourcing platform according to an embodiment of the present invention
- FIG. 5 is a schematic block diagram showing the structure of a working device of a crowdsourcing platform according to an embodiment of the present invention
- FIG. 6 is a schematic block diagram showing the structure of a working device of a crowdsourcing platform according to an embodiment of the present invention.
- FIG. 7 is a schematic block diagram showing the structure of an execution action unit according to an embodiment of the present invention.
- FIG. 8 is a schematic block diagram showing the structure of a computer device according to an embodiment of the present invention.
- Referring to FIG. 1, an embodiment of the present invention provides an operating method for a crowdsourcing platform, including the steps of:
- S1: sending a test task to a crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
- S2: receiving feedback information from the crowdsourcing account for the test task and comparing the feedback information with a preset correct answer to determine whether the crowdsourcing account answered correctly;
- S3: if the crowdsourcing account answered incorrectly, retrieving other test tasks from the test task library and sending them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The above test task refers to a preset task stored in the test task library. A test task contains a task question, a wrong answer, and a correct answer; when it is sent to a crowdsourcing account, the task question and the wrong answer are sent together, while the correct answer is used for comparison with the feedback from the crowdsourcing account.
- The above crowdsourcing account is an account registered on the crowdsourcing platform. It can receive tasks issued by the crowdsourcing platform and process crowdsourcing tasks, thereby earning rewards such as points, and at settlement it can receive money or other rewards according to the number of points.
- The above regular crowdsourcing task refers to a task that the issuing enterprise actually needs crowdsourced, such as an image recognition task sent to crowdsourcing accounts to collect information on the accuracy of image recognition.
- In this embodiment, a test task has the same format as a crowdsourcing task, so the user behind the crowdsourcing account can be tested without noticing it. For example, if a crowdsourcing task consists of a picture and an explanatory text for that picture, then a test task also consists of a picture and an explanatory text for that picture.
- The difference is that in a crowdsourcing task the explanatory text may be correct or wrong and there is no preset correct answer, whereas in a test task the explanatory text is deliberately wrong and a preset correct answer exists.
- The feedback information is the user's answer to the test task, and it covers several cases.
- In the first case, the user believes that the answer displayed for the test task on the crowdsourcing account is correct and simply clicks "Next question". In the second case, the user notices that the displayed answer is wrong, modifies it, submits the modified answer to the crowdsourcing platform, and then clicks "Next question" or moves directly to the next question.
- The preset correct answer refers to the correct answer set by the crowdsourcing platform for the task question.
- The above step of comparing the feedback information with the preset correct answer generally involves techniques such as semantic analysis of the text: if the feedback information expresses the same meaning as the correct answer, the crowdsourcing account is considered to have answered correctly, without requiring a word-for-word match; otherwise the crowdsourcing account is considered to have answered incorrectly.
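The patent does not specify a concrete comparison algorithm, so the following is only a minimal sketch of this step: it treats two answers as matching when their normalized texts are identical or when their token overlap exceeds a threshold. The function names, the normalization rules, and the 0.8 threshold are illustrative assumptions rather than part of the disclosure; a production system would substitute a real semantic-similarity model.

```python
import re

def normalize(text: str) -> str:
    """Lowercase the text and collapse punctuation/whitespace so trivially
    different phrasings of the same answer compare equal."""
    return re.sub(r"\W+", " ", text).strip().lower()

def answers_match(feedback: str, correct_answer: str, threshold: float = 0.8) -> bool:
    """Return True if the account's feedback expresses roughly the same meaning
    as the preset correct answer (a crude stand-in for semantic analysis)."""
    fb, ca = normalize(feedback), normalize(correct_answer)
    if fb == ca:                      # exact match after normalization
        return True
    fb_tokens, ca_tokens = set(fb.split()), set(ca.split())
    if not fb_tokens or not ca_tokens:
        return False
    overlap = len(fb_tokens & ca_tokens) / len(fb_tokens | ca_tokens)
    return overlap >= threshold       # Jaccard overlap as a similarity proxy
```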
- The above comparison result is one of two outcomes: answered correctly or answered incorrectly.
- A large number of different test tasks are stored in the test task library. If, after a test task is sent to a crowdsourcing account, the account's feedback answer is wrong, other test tasks are retrieved from the test task library and sent to that account until it answers correctly; crowdsourcing tasks are then sent to the account.
- Through such continuous testing it can be judged whether a crowdsourcing account is brushing or merely entered a wrong answer by accident. For example, if the user enters wrong answers repeatedly, the crowdsourcing account is brushing; if the account only gets the first test task wrong and answers the second one correctly, the user simply made a mistake rather than deliberately brushing.
- In other embodiments, a corresponding action is also performed according to the comparison result and a preset operating rule.
- The preset operating rule refers to a rule set by the crowdsourcing platform, mainly defined over the comparison result; for example, if the answer is correct, continue sending test tasks, and if test tasks are answered correctly a specified number of times in a row, increase the frequency of sending crowdsourcing tasks.
- Performing the corresponding action means that the crowdsourcing platform operates according to the operating rule. If the rule is "if the answer is wrong, keep sending test tasks", then whenever the feedback information from the crowdsourcing account differs from the preset correct answer, the platform continues the action of sending test tasks.
- In this embodiment, the crowdsourcing platform determines the answer to a crowdsourcing task by a cross-comparison method: the same crowdsourcing task is dispatched to n crowdsourcing accounts, the returned answers are grouped into identical-answer groups, and if exactly one group is the largest and it accounts for more than 50% of the n answers, verification succeeds and the platform adopts that majority answer; otherwise verification fails and the task is re-dispatched. Test tasks, by contrast, are compared directly with the preset correct answer and need no cross comparison. With this cross-comparison method the platform does not need to prepare a correct answer in advance; it only needs to find the answer shared by the most accounts, and that answer is taken as the correct one.
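A minimal sketch of this cross-comparison rule, assuming submitted answers can be compared for equality after some normalization; the function name and example data are illustrative assumptions, not part of the disclosure.

```python
from collections import Counter
from typing import Optional

def cross_compare(answers: list[str]) -> Optional[str]:
    """Adopt the answer shared by a strict majority of the n accounts.

    Returns the majority answer when exactly one answer group is largest and it
    covers more than 50% of all submissions; otherwise returns None, meaning
    verification failed and the task should be re-dispatched."""
    n = len(answers)
    if n == 0:
        return None
    counts = Counter(answers)                  # B1, B2, ..., Bk
    (top_answer, top_count), *rest = counts.most_common()
    if rest and rest[0][1] == top_count:       # tie: no single maximum
        return None
    if top_count / n > 0.5:                    # majority rule (> 50%)
        return top_answer
    return None

# Example: three accounts, two agree, so their answer is adopted.
assert cross_compare(["cat", "cat", "dog"]) == "cat"
assert cross_compare(["cat", "dog", "bird"]) is None   # re-dispatch the task
```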
- In this embodiment, the preset sending rule includes: after every specified number of crowdsourcing tasks are sent, send one test task; for example, send one test task after every 10 crowdsourcing tasks.
- In another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending them randomly distributed among the crowdsourcing tasks; for example, for 20 crowdsourcing tasks, 4 test tasks may be set and randomly interspersed among them.
- In yet another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending them relatively densely in the opening stage of sending crowdsourcing tasks; for example, for 100 crowdsourcing tasks with 20 test tasks, 5 test tasks are interspersed among the first 10 tasks sent and the remaining 15 are randomly interspersed among the later 90.
- Sending test tasks relatively densely at the start makes it possible to discover promptly whether the user behind a crowdsourcing account is working conscientiously; for example, if the account keeps getting the test tasks wrong, the user is clearly either unsuited to this type of crowdsourcing task or is brushing, and the platform stops sending crowdsourcing tasks to that user.
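The sketch below illustrates two of these sending rules, random interspersion and a front-loaded opening stage; the default counts follow the examples in the text, while the function names and list-based task representation are assumptions.

```python
import random

def intersperse_randomly(crowd_tasks: list, test_tasks: list) -> list:
    """Randomly intersperse test tasks among crowdsourcing tasks
    (e.g. 4 test tasks among 20 crowdsourcing tasks)."""
    queue = list(crowd_tasks)
    for test in test_tasks:
        queue.insert(random.randint(0, len(queue)), test)
    return queue

def intersperse_front_loaded(crowd_tasks: list, test_tasks: list,
                             opening: int = 10, opening_tests: int = 5) -> list:
    """Send test tasks relatively densely in the opening stage (e.g. 5 of 20
    test tasks among the first 10 crowdsourcing tasks, the rest among the
    remaining ones), so careless or brushing accounts are caught early."""
    head = intersperse_randomly(crowd_tasks[:opening], test_tasks[:opening_tests])
    tail = intersperse_randomly(crowd_tasks[opening:], test_tasks[opening_tests:])
    return head + tail
```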
- In a specific embodiment, the preset sending rules map the total number of tasks N to a test task distribution; the table is reproduced in the Description below.
- Referring to FIG. 2, in this embodiment, after step S3 the method further includes:
- S311: recording, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account;
- S312: penalizing the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- The first specified length of time is generally several hours or one day, that is, the period from when a crowdsourcing account starts working on crowdsourcing tasks until settlement.
- Crowdsourcing tasks are generally settled on a daily basis.
- The cumulative number of wrong answers refers to the number of test tasks the crowdsourcing account has answered incorrectly from when it started working on crowdsourcing tasks until settlement that day.
- The number of consecutive wrong answers refers to the number of test tasks the crowdsourcing account has answered incorrectly in a row; for example, if the user answers five test tasks wrongly in succession, the number of consecutive wrong answers is 5.
- The cumulative number of wrong answers and the number of consecutive wrong answers can both be recorded, or only one of them, as required.
- A penalty rule is set, and the penalty rule depends on the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- For example, the rule may be: whenever the crowdsourcing account answers a test task wrongly, the count K is increased by 1 and the count Q is increased by 1; whenever the crowdsourcing account answers a test task correctly, K is left unchanged and Q is reset to zero. In this way, bad users can be identified and removed from the crowdsourcing platform in the shortest time, and different penalties can be imposed accordingly.
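A minimal sketch of this counting rule, where K is the cumulative number of wrong answers in the settlement period and Q is the current streak of wrong answers. The suspension threshold corresponds to the "specified number of consecutive wrong answers" in the method; its default value here is an assumption, since the patent does not fix it.

```python
class BrushingCounter:
    """Tracks K (cumulative wrong answers) and Q (consecutive wrong answers)
    for one crowdsourcing account within one settlement period."""

    def __init__(self, suspend_after: int = 3):
        self.k = 0                    # cumulative wrong answers, never reset
        self.q = 0                    # consecutive wrong answers, reset on a correct answer
        self.suspend_after = suspend_after

    def record(self, answered_correctly: bool) -> None:
        if answered_correctly:
            self.q = 0                # correct answer: K unchanged, Q reset
        else:
            self.k += 1               # wrong answer: both counters increase
            self.q += 1

    def should_suspend(self) -> bool:
        """Suspend sending crowdsourcing and test tasks once the account has
        answered wrongly the specified number of times in a row."""
        return self.q >= self.suspend_after
```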
- A specific counting and penalty process can be configured on the basis of this rule.
- In this embodiment, the method further includes:
- S313: recording the crowdsourcing account's task performance within a second specified length of time, where the second specified length of time is greater than the first specified length of time and is a positive integer multiple of it, and the performance record includes the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account in each of the multiple first lengths of time;
- S314: controlling, according to the crowdsourcing account's task performance, the frequency with which tasks are sent to the crowdsourcing account.
- Steps S313 and S314 record the account's performance over the longer period and then control the sending frequency of crowdsourcing tasks accordingly. For example, if a person is found brushing multiple times within one month, the frequency of sending tasks to that user is reduced; when the brushing reaches a specified severity (for example, brushing on five consecutive days), the person's task account is suspended. This further prevents brushing on the crowdsourcing platform.
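A hedged sketch of this longer-horizon control: given one brushing flag per day (the first specified length of time) over a month (the second specified length of time), the dispatch frequency is scaled down and the account is cut off after a run of brushing days. The five-day threshold follows the example in the text; the function name, data layout, and proportional scaling are assumptions.

```python
def adjust_send_frequency(base_frequency: float, daily_brushing_flags: list[bool],
                          consecutive_limit: int = 5) -> float:
    """Return the task-sending frequency for the next period.

    daily_brushing_flags holds one entry per first-length period (e.g. per day)
    within the second, longer period; True means brushing was detected that day."""
    # Suspend the account entirely after `consecutive_limit` brushing days in a row.
    streak = 0
    for flagged in daily_brushing_flags:
        streak = streak + 1 if flagged else 0
        if streak >= consecutive_limit:
            return 0.0                        # stop sending tasks to this account
    # Otherwise, reduce the frequency in proportion to how often brushing occurred.
    brushing_rate = sum(daily_brushing_flags) / max(len(daily_brushing_flags), 1)
    return base_frequency * (1.0 - brushing_rate)
```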
- In this embodiment, the step of retrieving other test tasks from the test task library and sending them to the crowdsourcing account includes:
- S3101: looking up, according to the geographic area to which the crowdsourcing account belongs, a first test task library corresponding to that geographic area;
- S3102: retrieving a test task from the first test task library and sending it to the crowdsourcing account;
- S3103: determining whether all test tasks in the first test task library have been used once;
- S3104: if so, switching to a second test task library corresponding to another geographic area.
- There are a plurality of test task libraries, each provided with different test tasks, and each test task library corresponds to a geographic area for distribution.
- For example, test tasks in test task library A are issued only to crowdsourcing accounts in the Guangzhou area; once all test tasks in library A have been used, it is swapped for a test task library corresponding to another area, for example swapping the Guangzhou library A with the Hunan library B.
- In this embodiment, the crowdsourcing platform also records the numbers of the test tasks sent to each crowdsourcing account so that the same test task is not sent to the same account more than once. Specifically, each test task is given a number, the numbers of the test tasks each crowdsourcing account has already answered are recorded, and when a test task needs to be sent to a crowdsourcing account, the numbers it has already been tested with are looked up and a test task with a different number is selected and sent to the user.
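A minimal sketch of steps S3101 to S3104 together with the task-number bookkeeping described above: pick the library for the account's region, choose a test task the account has not seen, and fall back to another region's library once the regional one is exhausted. The data layout, function name, and example library contents are assumptions for illustration only.

```python
from typing import Optional

def pick_test_task(region: str,
                   libraries: dict[str, list[dict]],
                   answered_ids: set[str]) -> Optional[dict]:
    """libraries maps a geographic area (e.g. 'Guangzhou') to its test tasks;
    each task is a dict with a unique 'id'. answered_ids holds the numbers of
    test tasks this crowdsourcing account has already been given."""
    # S3101/S3102: try the library for the account's own region first;
    # S3103/S3104: fall back to other regions' libraries once it is exhausted.
    regions = [region] + [r for r in libraries if r != region]
    for r in regions:
        for task in libraries.get(r, []):
            if task["id"] not in answered_ids:    # never repeat a test task
                return task
    return None    # every known test task has already been sent to this account

# Example with illustrative data: Guangzhou accounts draw from library A first.
libraries = {"Guangzhou": [{"id": "A-1"}, {"id": "A-2"}], "Hunan": [{"id": "B-1"}]}
print(pick_test_task("Guangzhou", libraries, answered_ids={"A-1", "A-2"}))  # -> {'id': 'B-1'}
```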
- In this embodiment, the crowdsourcing platform sends test tasks to all participating crowdsourcing accounts at different times of day over a number of consecutive days, collects the answers, and analyzes how people in different regions answer, so as to understand the state of people in different regions and determine the sending periods for crowdsourcing tasks in each region.
- For example, if users in the Guangzhou area answer best at 9:00 every morning and show serious brushing at 3:00 pm, then in subsequent dispatching the frequency of tasks sent to the Guangzhou area is raised around 9:00 am and lowered around 3:00 pm.
- As another example, if crowdsourcing accounts in Guangzhou show serious brushing behavior in summer, the frequency of crowdsourcing tasks sent to accounts in that region is reduced in summer; if their answering efficiency is higher in winter, the frequency of crowdsourcing tasks sent to the region is increased in winter.
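As a rough illustration of this scheduling idea (the function name, accuracy thresholds, and multipliers are all assumptions, not from the disclosure), a per-region, per-hour quality score derived from the collected test answers can be turned into a sending-frequency multiplier:

```python
def hourly_send_multiplier(region_hour_accuracy: dict[tuple[str, int], float],
                           region: str, hour: int) -> float:
    """region_hour_accuracy maps (region, hour of day) to the observed fraction
    of test tasks answered correctly in that slot. Hours with high accuracy get
    more crowdsourcing tasks; hours with heavy brushing get fewer."""
    accuracy = region_hour_accuracy.get((region, hour), 0.5)   # neutral default
    if accuracy >= 0.9:
        return 1.5    # e.g. Guangzhou around 9:00 am: raise the frequency
    if accuracy <= 0.5:
        return 0.5    # e.g. Guangzhou around 3:00 pm: lower the frequency
    return 1.0
```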
- In this embodiment, the operating method of the crowdsourcing platform further includes interspersing short texts of specified content between the crowdsourcing tasks; that is, while sending crowdsourcing tasks, the crowdsourcing platform also sends short texts in between.
- The short texts can be jokes, "chicken soup for the soul" passages, and the like, to help the user relieve fatigue.
- The interface for such a short text (a joke, a chicken-soup passage, and so on) is provided with a corresponding "next question" or "exit reading" button, and the time from when the user receives the short text until the user triggers a response (clicking "next question" or "exit reading") is recorded.
- From this the user's reading preferences are determined; for example, if the user quickly clicks "exit reading" whenever a lame joke appears, then after a period of such analysis no more jokes are sent to that user.
- With the operating method of this embodiment, because test tasks and regular crowdsourcing tasks have the same format, the operator of a crowdsourcing account cannot tell which of the tasks sent by the platform are test tasks and which are crowdsourcing tasks. The platform can therefore test, without being noticed, whether the operator is brushing, and this keeps users answering carefully, so that the crowdsourcing platform obtains the most valuable task answers possible.
- An embodiment of the present invention further provides an operating device for a crowdsourcing platform, including:
- a test-sending unit 10, configured to send a test task to the crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
- a receiving and comparison unit 20, configured to receive feedback information from the crowdsourcing account for the test task and compare it with a preset correct answer to determine whether the crowdsourcing account answered correctly;
- an action execution unit 30, configured to, if the crowdsourcing account answered incorrectly, retrieve other test tasks from the test task library and send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then send crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The above test task refers to a preset task stored in the test task library. A test task contains a task question, a wrong answer, and a correct answer; when it is sent to a crowdsourcing account, the task question and the wrong answer are sent together, while the correct answer is used for comparison with the feedback from the crowdsourcing account.
- The above crowdsourcing account is an account registered on the crowdsourcing platform. It can receive tasks issued by the crowdsourcing platform and process crowdsourcing tasks, thereby earning rewards such as points, and at settlement it can receive money or other rewards according to the number of points.
- The above regular crowdsourcing task refers to a task that the issuing enterprise actually needs crowdsourced, such as an image recognition task sent to crowdsourcing accounts to collect information on the accuracy of image recognition.
- A test task has the same format as a crowdsourcing task, so the user behind the crowdsourcing account can be tested without noticing it. For example, if a crowdsourcing task consists of a picture and an explanatory text for that picture, then a test task also consists of a picture and an explanatory text for that picture.
- The difference is that in a crowdsourcing task the explanatory text may be correct or wrong and there is no preset correct answer, whereas in a test task the explanatory text is deliberately wrong and a preset correct answer exists.
- The feedback information is the user's answer to the test task, and it covers several cases. In the first case, the user believes that the answer displayed for the test task on the crowdsourcing account is correct and simply clicks "Next question". In the second case, the user notices that the displayed answer is wrong, modifies it, submits the modified answer to the crowdsourcing platform, and then clicks "Next question" or moves directly to the next question.
- The preset correct answer refers to the correct answer set by the crowdsourcing platform for the task question.
- The step of comparing the feedback information with the preset correct answer generally involves techniques such as semantic analysis of the text: if the feedback information expresses the same meaning as the correct answer, the crowdsourcing account is considered to have answered correctly, without requiring a word-for-word match; otherwise it is considered to have answered incorrectly.
- The above comparison result is one of two outcomes: answered correctly or answered incorrectly.
- A large number of different test tasks are stored in the test task library. If, after a test task is sent to a crowdsourcing account, the account's feedback answer is wrong, other test tasks are retrieved from the library and sent to that account until it answers correctly; crowdsourcing tasks are then sent to the account.
- Through such continuous testing it can be judged whether a crowdsourcing account is brushing or merely entered a wrong answer by accident. For example, if the user enters wrong answers repeatedly, the crowdsourcing account is brushing; if the account only gets the first test task wrong and answers the second one correctly, the user simply made a mistake.
- In other embodiments, a corresponding action is also performed according to the comparison result and a preset operating rule. The preset operating rule refers to a rule set by the crowdsourcing platform, mainly defined over the comparison result; for example, if the answer is correct, continue sending test tasks, and if test tasks are answered correctly a specified number of times in a row, increase the frequency of sending crowdsourcing tasks.
- Performing the corresponding action means that the crowdsourcing platform operates according to the operating rule. If the rule is "if the answer is wrong, keep sending test tasks", then whenever the feedback information from the crowdsourcing account differs from the preset correct answer, the platform continues the action of sending test tasks.
- In this embodiment, the crowdsourcing platform determines the answer to a crowdsourcing task by a cross-comparison method: the same task is dispatched to n crowdsourcing accounts, the returned answers are grouped, and if exactly one answer group is the largest and it accounts for more than 50% of the n answers, verification succeeds and that majority answer is adopted; otherwise the task is re-dispatched. Test tasks are instead compared directly with the preset correct answer.
- In this embodiment, the preset sending rule includes: after every specified number of crowdsourcing tasks are sent, send one test task; for example, send one test task after every 10 crowdsourcing tasks.
- In another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending them randomly distributed among the crowdsourcing tasks; for example, for 20 crowdsourcing tasks, 4 test tasks may be set and randomly interspersed among them.
- In yet another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending them relatively densely in the opening stage of sending crowdsourcing tasks. Sending test tasks densely at the start makes it possible to discover promptly whether the user behind a crowdsourcing account is working conscientiously; for example, if the account keeps getting the test tasks wrong, the user is clearly either unsuited to this type of crowdsourcing task or is brushing, and the platform stops sending crowdsourcing tasks to that user.
- In a specific embodiment, the preset sending rules follow the table given in the Description below.
- The operating device of the above crowdsourcing platform further includes:
- a first recording unit 311, configured to record, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account;
- a penalty unit 312, configured to penalize the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- The first specified length of time is generally several hours or one day, that is, the period from when a crowdsourcing account starts working on crowdsourcing tasks until settlement. Crowdsourcing tasks are generally settled on a daily basis.
- The cumulative number of wrong answers refers to the number of test tasks the crowdsourcing account has answered incorrectly from when it started working on crowdsourcing tasks until settlement that day.
- The number of consecutive wrong answers refers to the number of test tasks the crowdsourcing account has answered incorrectly in a row; for example, if the user answers five test tasks wrongly in succession, the number of consecutive wrong answers is 5.
- The cumulative number of wrong answers and the number of consecutive wrong answers can both be recorded, or only one of them, as required.
- A penalty rule is set, and the penalty rule depends on the cumulative number of wrong answers and/or the number of consecutive wrong answers. For example, the rule may be: whenever the crowdsourcing account answers a test task wrongly, the count K is increased by 1 and the count Q is increased by 1; whenever the crowdsourcing account answers a test task correctly, K is left unchanged and Q is reset to zero. In this way, bad users can be identified and removed from the crowdsourcing platform in the shortest time, and different penalties can be imposed accordingly.
- A specific counting and penalty process can be configured on the basis of this rule.
- The operating device of the above crowdsourcing platform further includes:
- a second recording unit 313, configured to record the crowdsourcing account's task performance within a second specified length of time, where the second specified length of time is greater than the first specified length of time and is a positive integer multiple of it, and the performance record includes the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account in each of the multiple first lengths of time;
- a sending control unit 314, configured to control, according to the crowdsourcing account's task performance, the frequency with which tasks are sent to the crowdsourcing account.
- The second recording unit 313 and the sending control unit 314 record the account's performance over the longer period and control the sending frequency of crowdsourcing tasks accordingly. For example, if a person is found brushing multiple times within one month, the frequency of sending tasks to that user is reduced; when the brushing reaches a specified severity (for example, brushing on five consecutive days), the person's task account is suspended. This further prevents brushing on the crowdsourcing platform.
- The action execution unit 30 includes:
- a lookup sub-module 3101, configured to look up, according to the geographic area to which the crowdsourcing account belongs, a first test task library corresponding to that geographic area;
- a retrieval sub-module 3102, configured to retrieve a test task from the first test task library and send it to the crowdsourcing account;
- a determining sub-module 3103, configured to determine whether all test tasks in the first test task library have been used once;
- a replacement sub-module 3104, configured to switch to a second test task library corresponding to another geographic area if all test tasks in the first test task library have been used once.
- There are a plurality of test task libraries, each provided with different test tasks.
- For example, test tasks in test task library A are issued only to crowdsourcing accounts in the Guangzhou area; once all test tasks in library A have been used, it is swapped for a test task library corresponding to another area, for example swapping the Guangzhou library A with the Hunan library B. This reduces how often the same test task appears in the same area: if the same person were given the same question twice, it would attract the user's attention, the user would realize it is a test task, answer it carefully, and continue to brush on the other questions.
- In this embodiment, the operating device of the crowdsourcing platform further includes a third recording unit, configured to record the numbers of the test tasks sent to each crowdsourcing account so that the same test task is not sent to the same account more than once.
- Specifically, each test task is given a number, the numbers of the test tasks each crowdsourcing account has already answered are recorded, and when a test task needs to be sent to a crowdsourcing account, the numbers it has already been tested with are looked up and a test task with a different number is selected and sent to the user.
- In this embodiment, the operating device of the crowdsourcing platform further includes a sending and collecting unit, configured to send test tasks to all participating crowdsourcing accounts at different times of day over a number of consecutive days, collect the answers, and analyze how people in different regions answer, so as to understand the state of people in different regions and determine the sending periods for crowdsourcing tasks in each region.
- For example, if crowdsourcing accounts in Guangzhou show serious brushing behavior in summer, the frequency of crowdsourcing tasks sent to accounts in that region is reduced in summer; if their answering efficiency is higher in winter, the frequency of crowdsourcing tasks sent to the region is increased in winter.
- In this embodiment, the operating device of the crowdsourcing platform further includes a short text unit, configured to intersperse short texts of specified content between crowdsourcing tasks; that is, while crowdsourcing tasks are being sent, short texts such as jokes or "chicken soup for the soul" passages are interspersed between them to help the user relieve fatigue.
- The operating device of the crowdsourcing platform further includes a monitoring unit, configured to provide a corresponding "next question" or "exit reading" button in the interface of such a short text, record the time from when the user receives the short text until the user triggers a response (clicking "next question" or "exit reading"), and from this determine the user's reading preferences. For example, if the user quickly clicks "exit reading" whenever a lame joke appears, then after a period of such analysis no more jokes are sent to that user.
- The computer device may be a server, and its internal structure may be as shown in FIG. 8.
- The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, where the processor of the computer device provides computation and control capabilities.
- The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer readable instructions, and a database, while the internal memory provides an environment for running the operating system and the computer readable instructions stored in the non-volatile storage medium.
- The database of the computer device is used to store data such as test tasks. The network interface of the computer device is used to communicate with an external terminal via a network connection.
- The computer readable instructions, when executed by the processor, implement the processes of the method embodiments described above.
- A non-transitory computer readable storage medium is further provided, on which computer readable instructions are stored; when the computer readable instructions are executed by a processor, the processes of the foregoing method embodiments are implemented.
Abstract
The present invention discloses an operating method and device for a crowdsourcing platform, a computer device and a storage medium. The method comprises: sending a test task to a crowdsourcing account; receiving feedback information from the crowdsourcing account and comparing the feedback information with a preset correct answer to determine whether the answer of the crowdsourcing account is correct; and, if not, retrieving other test tasks from a test task library and sending them to the crowdsourcing account until the answer of the crowdsourcing account is correct, and then sending a crowdsourcing task to the crowdsourcing account.
Description
This application claims priority to the Chinese patent application with application number 2018103438779, entitled "Operating method, device, computer equipment and storage medium for a crowdsourcing platform", filed with the Chinese Patent Office on April 17, 2018, the entire contents of which are incorporated herein by reference.
The present invention relates to the field of crowdsourcing, and more particularly to an operating method, apparatus, computer device and storage medium for a crowdsourcing platform.
Crowdsourcing is a term coined in 2006 by Wired magazine to describe a new business model in which companies use the Internet to distribute work, discover ideas, or solve technical problems. When an enterprise wants to verify or improve image recognition, it issues an image proofreading task on the crowdsourcing platform in a crowdsourced manner to collect user feedback.
A user obtains an image proofreading task on the crowdsourcing platform and judges whether the recognition result automatically given by the system is consistent with the image content; if it is inconsistent, the user corrects it and then clicks the "Next" button to submit the answer. Because the crowdsourcing platform already displays the system's recognition result for each image, a user can simply submit the system answer as-is. The verification principle is that the same task is dispatched to at least three users and the collected answers are cross-matched under a majority rule, so when most users directly submit a wrong system answer without checking it, the system judges that answer to be correct even though it may actually be wrong. This leaves a loophole for point-farming ("brushing") behavior, which calls for effective anti-brushing measures and real-time monitoring.
SUMMARY OF THE INVENTION: A primary object of the present invention is to provide an operating method, apparatus, computer device and storage medium for a crowdsourcing platform that can test a crowdsourcing account for behaviors such as brushing.
To achieve the above object, the present invention provides an operating method for a crowdsourcing platform, including:
sending a test task to a crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
receiving feedback information from the crowdsourcing account for the test task and comparing the feedback information with a preset correct answer to determine whether the crowdsourcing account answered correctly;
if the crowdsourcing account answered incorrectly, retrieving other test tasks from the test task library and sending them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
The present invention further provides an operating device for a crowdsourcing platform, comprising:
a test-sending unit, configured to send a test task to the crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
a receiving and comparison unit, configured to receive feedback information from the crowdsourcing account for the test task and compare it with a preset correct answer to determine whether the crowdsourcing account answered correctly;
an action execution unit, configured to, if the crowdsourcing account answered incorrectly, retrieve other test tasks from the test task library and send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then send crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
The present invention further provides a computer device comprising a memory and a processor, the memory storing computer readable instructions, and the processor implementing the steps of any of the methods described above when executing the computer readable instructions.
The present invention further provides a non-transitory computer readable storage medium on which computer readable instructions are stored; when executed by a processor, the computer readable instructions implement the steps of any of the methods described above.
With the operating method, device, computer equipment and storage medium of the present invention, because a test task has the same format as a regular crowdsourcing task, the operator of a crowdsourcing account cannot tell which of the tasks sent by the crowdsourcing platform are test tasks and which are crowdsourcing tasks. The crowdsourcing platform can therefore test, without the operator noticing, whether the operator is brushing, and this keeps users answering carefully, so that the platform obtains the most valuable task answers possible. Moreover, if a crowdsourcing user answers test tasks incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks is suspended, effectively preventing the user's brushing behavior.
FIG. 1 is a schematic flow chart of an operating method for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an operating method for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an operating method for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an operating device for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of an operating device for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of an operating device for a crowdsourcing platform according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of an action execution unit according to an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides an operating method for a crowdsourcing platform, including the steps of:
S1: sending a test task to a crowdsourcing account according to a preset sending rule, where the answer displayed for the test task in the crowdsourcing account is a wrong answer and the test task has the same format as a regular crowdsourcing task;
S2: receiving feedback information from the crowdsourcing account for the test task and comparing the feedback information with a preset correct answer to determine whether the crowdsourcing account answered correctly;
S3: if the crowdsourcing account answered incorrectly, retrieving other test tasks from the test task library and sending them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account; where, when the crowdsourcing account answers incorrectly a specified number of times in a row, sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
As described in step S1, the above test task refers to a preset task stored in the test task library. A test task contains a task question, a wrong answer, and a correct answer; when it is sent to a crowdsourcing account, the task question and the wrong answer are sent together, while the correct answer is used for comparison with the feedback from the crowdsourcing account. The crowdsourcing account is an account registered on the crowdsourcing platform, which can receive tasks issued by the platform and process crowdsourcing tasks, thereby earning rewards such as points, and at settlement it can receive money or other rewards according to the number of points. A regular crowdsourcing task is a task that the issuing enterprise actually needs crowdsourced, such as an image recognition task sent to crowdsourcing accounts to collect information on the accuracy of image recognition. In this embodiment, a test task has the same format as a crowdsourcing task, so the user behind the crowdsourcing account can be tested without noticing it: for example, if a crowdsourcing task consists of a picture and an explanatory text for that picture, then a test task also consists of a picture and an explanatory text; the difference is that in a crowdsourcing task the explanatory text may be correct or wrong and there is no preset correct answer, whereas in a test task the explanatory text is deliberately wrong and a preset correct answer exists.
As described in step S2, the feedback information is the user's answer to the test task, and it covers several cases: in the first case, the user believes the answer displayed for the test task on the crowdsourcing account is correct and simply clicks "Next question"; in the second case, the user notices that the displayed answer is wrong, modifies it, submits the modified answer to the crowdsourcing platform, and then clicks "Next question" or moves directly to the next question. In this embodiment, the preset correct answer refers to the correct answer set by the crowdsourcing platform for the task question. Comparing the feedback information with the preset correct answer generally involves techniques such as semantic analysis of the text: if the feedback information expresses the same meaning as the correct answer, the crowdsourcing account is considered to have answered correctly, without requiring a word-for-word match; otherwise it is considered to have answered incorrectly.
As described in step S3, the comparison result is one of two outcomes: answered correctly or answered incorrectly. A large number of different test tasks are stored in the test task library; if, after a test task is sent to a crowdsourcing account, the account's feedback answer is wrong, other test tasks are retrieved from the library and sent to that account until it answers correctly, and crowdsourcing tasks are then sent to the account. Through such continuous testing it can be judged whether a crowdsourcing account is brushing or merely entered a wrong answer by accident: repeated wrong answers indicate brushing, whereas getting only the first test task wrong and the second one right means the user simply made a mistake rather than deliberately brushing. To improve the accuracy of crowdsourcing answers, if a user is found to be brushing, crowdsourcing tasks are no longer sent to that account. In other embodiments, a corresponding action is also performed according to the comparison result and a preset operating rule; the preset operating rule is a rule set by the crowdsourcing platform, mainly defined over the comparison result, for example: if the answer is correct, continue sending test tasks, and if test tasks are answered correctly a specified number of times in a row, increase the frequency of sending crowdsourcing tasks. Performing the corresponding action means that the crowdsourcing platform operates according to the rule; if the rule is "if the answer is wrong, keep sending test tasks", then whenever the feedback information from the crowdsourcing account differs from the preset correct answer, the platform continues the action of sending test tasks.
In this embodiment, the crowdsourcing platform determines the answer to a crowdsourcing task by a cross-comparison method, as follows: when the crowdsourcing platform dispatches the same crowdsourcing task to n crowdsourcing accounts for verification, it obtains answers A1, A2, ..., An. Identical answers are grouped and counted: if, for example, A1 = A2 = Ax, then B1 = 1 + 1 + 1 = 3, and so on, giving counts B1, B2, ..., Bk for the k distinct answers. If there is exactly one maximum value B among B1, B2, ..., Bk and B/n > 50% (that is, the most common answer accounts for more than half of the n answers), verification succeeds and the crowdsourcing platform adopts the majority answer corresponding to B; otherwise verification fails and the system re-dispatches the crowdsourcing task. A test task, in contrast, is compared directly with the preset correct answer, and no cross comparison is needed. With this cross-comparison method the crowdsourcing platform does not need to prepare a correct answer in advance; it only needs to find, among the submitted answers, the one shared by the most accounts, and that answer is taken as the correct one.
In this embodiment, the preset sending rule includes: after every specified number of crowdsourcing tasks have been sent, sending one test task. For example, after every 10 crowdsourcing tasks, one test task is sent.
In another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks, and sending the test tasks in a random distribution. For example, for a batch of 20 crowdsourcing tasks, 4 test tasks can be set, and these 4 test tasks are randomly interspersed among the 20 crowdsourcing tasks when they are sent.
In yet another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks, and sending the test tasks relatively densely, at random, during the initial stage of sending the crowdsourcing tasks. For example, for 100 crowdsourcing tasks with 20 test tasks configured, 5 test tasks are interspersed among the first 10 tasks sent, and the remaining 15 test tasks are randomly interspersed among the following 90. Sending the test tasks relatively densely at the start makes it possible to find out promptly whether the user behind the crowdsourcing account is working conscientiously; for example, if the account answers all of its test tasks wrongly, the user is evidently not suited to this type of crowdsourcing task, or is exhibiting brushing behavior, and the platform stops sending crowdsourcing tasks to that user.
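The 100-task example above can be expressed as a simple position-planning routine. This is only a sketch under the stated assumptions (5 test tasks within the first 10 positions, the remaining 15 among the last 90); the counts would be configuration values in practice:

    import random

    def plan_test_positions(total=100, early_slots=10, early_tests=5, late_tests=15):
        """Return the 1-based positions in the task stream at which test tasks are inserted,
        denser at the start and random thereafter."""
        early = random.sample(range(1, early_slots + 1), early_tests)
        late = random.sample(range(early_slots + 1, total + 1), late_tests)
        return sorted(early + late)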
In a specific embodiment, the preset sending rule is as shown in the following table:
Total number of test tasks and crowdsourcing tasks, N | Test task distribution |
N = [0-15] | No test tasks |
N = [16-40] | 5 test tasks, randomly distributed (5 test tasks in total) |
N = [41-100] | One test task every 15 tasks (4 test tasks in total) |
N = [101-200] | One test task every 25 tasks (4 test tasks in total) |
N = [201-400] | One test task every 20 tasks (10 test tasks in total) |
N > 400 | The above test task distribution repeats; the values do not change |
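Read as a lookup, the table maps the total task count N to a test-task distribution. A hedged sketch of that lookup follows; the "repeat" row for N > 400 is interpreted here as keeping the last distribution unchanged, and the return values simply mirror the table rows:

    def test_task_plan(n):
        """Map the total task count N to (spacing, number_of_test_tasks) per the table above."""
        if n <= 15:
            return None                 # no test tasks
        if n <= 40:
            return ("random", 5)        # 5 test tasks, randomly distributed
        if n <= 100:
            return (15, 4)              # one test task every 15 tasks, 4 in total
        if n <= 200:
            return (25, 4)              # one test task every 25 tasks, 4 in total
        if n <= 400:
            return (20, 10)             # one test task every 20 tasks, 10 in total
        return (20, 10)                 # N > 400: the distribution repeats, values unchanged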
Referring to FIG. 2, in this embodiment, after step S3 — in which, if the crowdsourcing account's answer is wrong, other test tasks are retrieved from the test task library and sent to the crowdsourcing account until the account answers correctly, after which crowdsourcing tasks are sent to the account, and in which, when the crowdsourcing account answers wrongly a specified number of times in succession, the sending of crowdsourcing tasks and test tasks to the account is suspended — the method includes:
S311: recording, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account;
S312: penalizing the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
As described in step S311 above, the first specified length of time is generally a few hours or one day, i.e. the length of time from when a crowdsourcing account starts working on crowdsourcing tasks until settlement; crowdsourcing tasks are generally settled once a day. The cumulative number of wrong answers is the total number of test tasks the crowdsourcing account has answered wrongly from the start of its work until that day's settlement. The number of consecutive wrong answers is the number of test tasks the crowdsourcing account has answered wrongly in succession; for example, if the user answers 5 test tasks wrongly in a row, the number of consecutive wrong answers is 5. In this embodiment both counts may be recorded at the same time, or only one of them, as required.
As described in step S312 above, penalty rules are configured, and the penalty rules depend on the cumulative number of wrong answers and/or the number of consecutive wrong answers.
In a specific embodiment, the wrong-answer count for test tasks has two dimensions: the cumulative number of wrong answers (K) and the number of consecutive wrong answers (Q), where K = {0, 1, 2, 3, 4, 5} and Q = {0, 1, 2, 3, 4}. The rule is: whenever the crowdsourcing account gets a test task wrong, the count K is increased by 1 and the count Q is increased by 1; whenever the crowdsourcing account gets a test task right, the count K is unchanged and the count Q is reset to zero. This removes bad users from the crowdsourcing platform in the shortest possible time and applies penalties of corresponding severity. The specific counting and penalty process is as follows:
1. When the crowdsourcing account answers the first test task wrongly, mark K = 1 and Q = 1 (the initial values of K and Q are 0).
2. Because whenever the crowdsourcing account gets any test task wrong, the next item it works on will also be a test task, at this point: (1) if the account answers the second test task correctly (i.e. it has not answered wrongly twice in a row), the count K is unchanged and Q is reset to 0, i.e. K = 1 and Q = 0; (2) if the account answers the test task wrongly again (i.e. two wrong in a row), both counts are incremented, K = K + 1 and Q = Q + 1, i.e. K = 2 and Q = 2.
3. Following the above rules, the counts K and Q are updated in real time whenever the user completes a test task.
4. The different values of K and Q correspond to different penalty coefficients DK and DQ respectively; the specific correspondences are shown in the following two tables:
Correspondence between K and DK values
K | 0 | 1 | 2 | 3 | 4 | 5 |
DK | 0 | 0 | 0 | 0.4 | 0.8 | 1 |
Correspondence between Q and DQ values
Q | 0 | 1 | 2 | 3 | 4 |
DQ | 0 | 0 | 0.2 | 0.8 | 1 |
The values of DK and DQ represent the proportion of the crowdsourcing account's task points for the current day that the crowdsourcing platform will deduct, and the larger of the two is taken as the currently effective penalty coefficient. For example, when DK = 0.4 and DQ = 0.8, the system takes 0.8 as the effective value and deducts 80% of the user's task points for the day as a penalty. In addition, when DK = 1, besides deducting all the task points the crowdsourcing account has earned that day, the system also resets the account's completed-task count for the day to zero and starts counting again; when DQ = 1, besides clearing the task count and task points, the system suspends task dispatch to the brushing user for one hour.
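The counting rules and the two coefficient tables can be captured directly in code. The sketch below is illustrative: it assumes the K and Q counters saturate at the ends of their ranges (K in 0..5, Q in 0..4), and the function names are hypothetical.

    DK = {0: 0, 1: 0, 2: 0, 3: 0.4, 4: 0.8, 5: 1}    # cumulative-wrong-answer coefficients
    DQ = {0: 0, 1: 0, 2: 0.2, 3: 0.8, 4: 1}          # consecutive-wrong-answer coefficients

    def update_counters(k, q, answered_correctly):
        """Update (K, Q) after one test task: K only grows, Q resets on a correct answer."""
        if answered_correctly:
            return k, 0
        return min(k + 1, 5), min(q + 1, 4)           # counters assumed to saturate

    def penalty(k, q):
        """Return (deduction_ratio, reset_daily_records, suspend_dispatch_one_hour)."""
        ratio = max(DK[k], DQ[q])                     # the larger coefficient is the effective value
        reset = DK[k] == 1 or DQ[q] == 1              # either case clears the day's count and points
        suspend = DQ[q] == 1                          # only DQ = 1 also pauses dispatch for one hour
        return ratio, reset, suspend

    # Example from the text: DK = 0.4 and DQ = 0.8 give an effective value of 0.8,
    # i.e. 80% of the day's task points are deducted.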
Referring to FIG. 3, in this embodiment, after step S311 of recording, within the first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account, the method includes:
S313: recording, within a second specified length of time, the task work record of the crowdsourcing account, where the second specified length of time is greater than the first specified length of time and is a positive integer multiple of it, and the work record includes the account's cumulative numbers of wrong answers and/or numbers of consecutive wrong answers over multiple periods of the first length of time;
S314: controlling, according to the task work record of the crowdsourcing account, the frequency at which tasks are sent to the account.
As described in steps S313 and S314 above, the work record of the crowdsourcing account is kept, and the frequency at which crowdsourcing tasks are sent is then controlled according to that record. For example, if a user exhibits brushing behavior several times within one month, the frequency at which tasks are sent to that user is reduced; when the brushing becomes severe to a specified degree (for example, brushing occurs on 5 consecutive days), the user's task account is banned. This further prevents brushing on the crowdsourcing platform.
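As an illustration only, the frequency control described here could be driven by a simple rule over per-day brushing records; the one-month window and the five-consecutive-day threshold follow the example in the text, and everything else is an assumption:

    def dispatch_decision(daily_brushing_flags):
        """daily_brushing_flags: one boolean per day in the monitoring window
        (True means brushing was detected that day)."""
        longest_streak = streak = 0
        for flagged in daily_brushing_flags:
            streak = streak + 1 if flagged else 0
            longest_streak = max(longest_streak, streak)
        if longest_streak >= 5:
            return "ban_account"          # brushing on 5 consecutive days
        if sum(daily_brushing_flags) > 1:
            return "reduce_frequency"     # repeated brushing within the window
        return "normal"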
In this embodiment, the step of retrieving other test tasks from the test task library and continuing to send them to the crowdsourcing account includes:
S3101: searching, according to the geographic region to which the crowdsourcing account belongs, for a first test task library corresponding to that geographic region;
S3102: retrieving test tasks from the first test task library and continuing to send them to the crowdsourcing account;
S3103: determining whether all of the test tasks in the first test task library have been used once;
S3104: if so, swapping the first test task library with a second test task library corresponding to another geographic region.
As described in steps S3101, S3102, S3103 and S3104 above, there are multiple test task libraries, each containing different test tasks, and each test task library is issued for one geographic region. For example, the test tasks in test task library A are issued only to crowdsourcing accounts in the Guangzhou region. Once all of the test tasks in library A have been used, it is swapped with a test task library corresponding to another region, for example Guangzhou's library A with Hunan's library B. This reduces repeated appearances of the same test task in the same region: if the same question were answered twice by the same person, it would attract the user's attention, the user would realize it is a test task and treat it seriously while continuing to brush through the other questions.
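A hedged sketch of the regional library selection and swap follows, assuming each test task carries a unique number and that libraries are keyed by region name; the class and field names are illustrative only:

    class RegionalTestLibraries:
        """Test-task libraries keyed by geographic region (e.g. library A for Guangzhou)."""

        def __init__(self, libraries_by_region):
            self.libraries = dict(libraries_by_region)                  # region -> list of test tasks
            self.used = {region: set() for region in self.libraries}    # task numbers already issued there

        def next_test_task(self, region):
            task = self._first_unused(region)
            if task is None:
                # every task in this region's library has been used once:
                # swap libraries with another region (e.g. A <-> B) and try again
                other = next(r for r in self.libraries if r != region)
                self.libraries[region], self.libraries[other] = (
                    self.libraries[other], self.libraries[region])
                task = self._first_unused(region)
            return task

        def _first_unused(self, region):
            for task in self.libraries[region]:
                if task["number"] not in self.used[region]:
                    self.used[region].add(task["number"])
                    return task
            return None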
In this embodiment, the crowdsourcing platform also records the test task numbers sent to each crowdsourcing account, to prevent the same test task from being sent to the same account more than once. Specifically, each test task is given a number, and the number of every test task that each crowdsourcing account has answered is recorded; when a test task needs to be sent to a crowdsourcing account, the numbers of the test tasks that account has already been tested with are retrieved, and a test task with a different number is selected and sent to the user for testing.
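Keeping a per-account history of issued test-task numbers, as described here, might look like the following sketch; the in-memory dictionary stands in for whatever storage the platform actually uses:

    sent_test_tasks = {}   # crowdsourcing account id -> set of test task numbers already sent

    def pick_unseen_test_task(account_id, test_task_library):
        """Choose a test task whose number this account has not been given before."""
        seen = sent_test_tasks.setdefault(account_id, set())
        for task in test_task_library:
            if task["number"] not in seen:
                seen.add(task["number"])
                return task
        return None   # every numbered test task has already been sent to this account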
In this embodiment, over several consecutive days the crowdsourcing platform sends test tasks at different times of each day to all crowdsourcing accounts participating in answering, then collects the results and analyzes how people in different regions answer, thereby learning the state of mind of people in different regions and determining the time periods in which crowdsourcing tasks should be sent to each region. For example, if users in the Guangzhou region answer best at 9 a.m. every day and severe brushing occurs at 3 p.m., then in subsequent task dispatch the frequency of sending tasks to the Guangzhou region is increased at around 9 a.m. and reduced at around 3 p.m.
In this embodiment, each crowdsourcing account can also be monitored over the long term, and crowdsourcing tasks can be dispatched appropriately according to how accounts in different regions answer in different seasons. For example, if crowdsourcing accounts in the Guangzhou region show severe brushing behavior in summer, the frequency of sending crowdsourcing tasks to accounts in that region is reduced in summer; if answering is more efficient in winter, the frequency of sending crowdsourcing tasks to that region is increased in winter.
In this embodiment, the operating method for the crowdsourcing platform further includes interspersing short passages of specified content between crowdsourcing tasks. That is, while sending crowdsourcing tasks, the crowdsourcing platform also intersperses short passages between them; these may be jokes, inspirational ("chicken soup for the soul") pieces and the like, to help the user relieve fatigue. Further, the interface for such a passage is provided with corresponding buttons such as "next question" or "stop reading", and the length of time between the user receiving the passage and triggering the next action (clicking "next question", "stop reading", etc.) is recorded, from which the user's reading preferences can be judged. For example, if a user quickly clicks "stop reading" whenever a corny joke appears, then after a period of analysis corny jokes are no longer sent to that user.
In the operating method for the crowdsourcing platform of this embodiment of the present invention, because test tasks have the same format as regular crowdsourcing tasks, the person working on the crowdsourcing account cannot tell which of the tasks sent by the platform are test tasks and which are crowdsourcing tasks. The crowdsourcing platform can therefore test, without the worker noticing, whether the worker is engaging in brushing or similar behavior, and by this method keep the user continuously in a state of answering conscientiously, so that the crowdsourcing platform can obtain valuable task answers to the greatest possible extent.
Referring to FIG. 4, an embodiment of the present invention further provides an operating device for a crowdsourcing platform, including:
a test sending unit 10, configured to send test tasks to a crowdsourcing account according to a preset sending rule, where the answer displayed for a test task on the crowdsourcing account is a wrong answer, and the test task has the same format as a regular crowdsourcing task;
a receiving and comparison unit 20, configured to receive the crowdsourcing account's feedback on the test task and compare the feedback with a preset correct answer to determine whether the crowdsourcing account's answer is correct;
an action execution unit 30, configured to, if the crowdsourcing account's answer is wrong, retrieve other test tasks from the test task library and continue sending them to the crowdsourcing account until the account answers correctly, and then send crowdsourcing tasks to the account, where, when the crowdsourcing account answers wrongly a specified number of times in succession, the sending of crowdsourcing tasks and test tasks to the account is suspended.
In the test sending unit 10 described above, a test task is a preset task stored in the test task library; it contains the task question, a wrong answer and a correct answer. When it is sent to the crowdsourcing account, the task question is sent together with the wrong answer, while the correct answer is used for comparison with the feedback from the crowdsourcing account. The crowdsourcing account is an account registered with the crowdsourcing platform; it can receive tasks published by the platform and process crowdsourcing tasks, thereby earning rewards such as points, and at settlement can obtain corresponding rewards such as money according to the number of points. A regular crowdsourcing task is a task that an enterprise or other publisher actually needs crowdsourced, such as an image recognition task sent to crowdsourcing accounts to collect information on the accuracy of image recognition. In this embodiment the test task has the same format as the crowdsourcing task, so the user behind the crowdsourcing account can be tested without noticing. For example, if the format of a crowdsourcing task is an image plus a piece of text explaining the image, then a test task is likewise an image plus a piece of explanatory text; the difference is that in a crowdsourcing task the explanatory text may be either correct or wrong and there is no preset correct answer, whereas in a test task the explanatory text is always wrong and there is a preset correct answer.
In the receiving and comparison unit 20 described above, the feedback is the user's response to the test task, and it covers several cases. In the first case, the user believes that the answer displayed on the crowdsourcing account for the test task is correct and simply clicks "next question"; in the second case, the user notices that the displayed answer is wrong, modifies it, submits the modified answer to the crowdsourcing platform, and then clicks "next question" or moves straight on to the next question. In this embodiment the preset correct answer is the correct answer configured by the crowdsourcing platform for the task question. Comparing the feedback with the preset correct answer generally involves techniques such as textual semantic analysis: if the meaning expressed by the feedback is the same as the meaning of the correct answer, the crowdsourcing account's answer is considered correct, without requiring a word-for-word match; otherwise the answer is considered wrong.
In the action execution unit 30 described above, the comparison result is one of two outcomes: the answer is correct or the answer is wrong. The test task library stores a large number of different test tasks. After a test task is sent to a crowdsourcing account, if the account's feedback answer is wrong, further test tasks are retrieved from the test task library and sent to the account until the account answers correctly, and only then are crowdsourcing tasks sent to the account. Through this continuous testing it can be judged whether the crowdsourcing account is exhibiting brushing behavior or has merely entered a wrong answer by accident: if the user answers wrongly several times in a row, the account is exhibiting brushing behavior; if the account only gets the first test task wrong and answers the second test task correctly, the user simply made a mistake rather than deliberately brushing. To improve the accuracy of the crowdsourced answers, if the user exhibits brushing behavior, the platform stops sending crowdsourcing tasks to that account. In other embodiments, a corresponding action is also performed according to the comparison result and preset job rules. The preset job rules are rules configured on the crowdsourcing platform, mainly defined over the comparison result, for example: if the answer is correct, continue sending test tasks; if the account answers test tasks correctly a specified number of times in a row, increase the frequency at which crowdsourcing tasks are sent. Performing the corresponding action means that the crowdsourcing platform operates according to these job rules; for instance, if a job rule is "on a wrong answer, continue sending test tasks", then whenever the feedback from the crowdsourcing account differs from the preset correct answer, the action "continue sending test tasks" is executed.
In this embodiment, the method used by the crowdsourcing platform to compute the answer to a crowdsourcing task is a cross-comparison method, as follows:
When the crowdsourcing platform dispatches the same crowdsourcing task to n crowdsourcing accounts for verification, it obtains answers A1, A2, ..., An. If identical answers exist, for example A1 = A2 = Ax, the corresponding count is recorded as B1 = 1 + 1 + 1 = 3, and so on for the k distinct answers, giving counts B1, B2, ..., Bk. If there is one and only one maximum value B among B1, B2, ..., Bk, and B/n > 50%, the verification succeeds and the crowdsourcing platform adopts the majority answer corresponding to B. Otherwise the verification fails and the system re-dispatches the crowdsourcing task. A test task, by contrast, is compared directly with its preset correct answer, so no cross comparison is needed. With this cross-comparison method the crowdsourcing platform does not need to prepare a correct answer in advance; it only needs to find, among the collected answers, the answer that occurs most often, and that answer is taken to be the correct one.
In this embodiment, the preset sending rule includes: after every specified number of crowdsourcing tasks have been sent, sending one test task. For example, after every 10 crowdsourcing tasks, one test task is sent.
In another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks, and sending the test tasks in a random distribution. For example, for a batch of 20 crowdsourcing tasks, 4 test tasks can be set, and these 4 test tasks are randomly interspersed among the 20 crowdsourcing tasks when they are sent.
In yet another embodiment, the preset sending rule includes: setting a corresponding number of test tasks according to the total number of crowdsourcing tasks, and sending the test tasks relatively densely, at random, during the initial stage of sending the crowdsourcing tasks. For example, for 100 crowdsourcing tasks with 20 test tasks configured, 5 test tasks are interspersed among the first 10 tasks sent, and the remaining 15 test tasks are randomly interspersed among the following 90. Sending the test tasks relatively densely at the start makes it possible to find out promptly whether the user behind the crowdsourcing account is working conscientiously; for example, if the account answers all of its test tasks wrongly, the user is evidently not suited to this type of crowdsourcing task, or is exhibiting brushing behavior, and the platform stops sending crowdsourcing tasks to that user.
In a specific embodiment, the preset sending rule is as shown in the following table:
Total number of test tasks and crowdsourcing tasks, N | Test task distribution |
N = [0-15] | No test tasks |
N = [16-40] | 5 test tasks, randomly distributed (5 test tasks in total) |
N = [41-100] | One test task every 15 tasks (4 test tasks in total) |
N = [101-200] | One test task every 25 tasks (4 test tasks in total) |
N = [201-400] | One test task every 20 tasks (10 test tasks in total) |
N > 400 | The above test task distribution repeats; the values do not change |
Referring to FIG. 5, in this embodiment the operating device for the crowdsourcing platform further includes:
a first recording unit 311, configured to record, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account;
a penalty unit 312, configured to penalize the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
In the first recording unit 311 described above, the first specified length of time is generally a few hours or one day, i.e. the length of time from when a crowdsourcing account starts working on crowdsourcing tasks until settlement; crowdsourcing tasks are generally settled once a day. The cumulative number of wrong answers is the total number of test tasks the crowdsourcing account has answered wrongly from the start of its work until that day's settlement. The number of consecutive wrong answers is the number of test tasks the crowdsourcing account has answered wrongly in succession; for example, if the user answers 5 test tasks wrongly in a row, the number of consecutive wrong answers is 5. In this embodiment both counts may be recorded at the same time, or only one of them, as required.
In the penalty unit 312 described above, penalty rules are configured, and the penalty rules depend on the cumulative number of wrong answers and/or the number of consecutive wrong answers.
In a specific embodiment, the wrong-answer count for test tasks has two dimensions: the cumulative number of wrong answers (K) and the number of consecutive wrong answers (Q), where K = {0, 1, 2, 3, 4, 5} and Q = {0, 1, 2, 3, 4}. The rule is: whenever the crowdsourcing account gets a test task wrong, the count K is increased by 1 and the count Q is increased by 1; whenever the crowdsourcing account gets a test task right, the count K is unchanged and the count Q is reset to zero. This removes bad users from the crowdsourcing platform in the shortest possible time and applies penalties of corresponding severity. The specific counting and penalty process is as follows:
1. When the crowdsourcing account answers the first test task wrongly, mark K = 1 and Q = 1 (the initial values of K and Q are 0).
2. Because whenever the crowdsourcing account gets any test task wrong, the next item it works on will also be a test task, at this point: (1) if the account answers the second test task correctly (i.e. it has not answered wrongly twice in a row), the count K is unchanged and Q is reset to 0, i.e. K = 1 and Q = 0; (2) if the account answers the test task wrongly again (i.e. two wrong in a row), both counts are incremented, K = K + 1 and Q = Q + 1, i.e. K = 2 and Q = 2.
3. Following the above rules, the counts K and Q are updated in real time whenever the user completes a test task.
4. The different values of K and Q correspond to different penalty coefficients DK and DQ respectively; the specific correspondences are shown in the following two tables:
Correspondence between K and DK values
K | 0 | 1 | 2 | 3 | 4 | 5 |
DK | 0 | 0 | 0 | 0.4 | 0.8 | 1 |
Correspondence between Q and DQ values
Q | 0 | 1 | 2 | 3 | 4 |
DQ | 0 | 0 | 0.2 | 0.8 | 1 |
The values of DK and DQ represent the proportion of the crowdsourcing account's task points for the current day that the crowdsourcing platform will deduct, and the larger of the two is taken as the currently effective penalty coefficient. For example, when DK = 0.4 and DQ = 0.8, the system takes 0.8 as the effective value and deducts 80% of the user's task points for the day as a penalty. In addition, when DK = 1, besides deducting all the task points the crowdsourcing account has earned that day, the system also resets the account's completed-task count for the day to zero and starts counting again; when DQ = 1, besides clearing the task count and task points, the system suspends task dispatch to the brushing user for one hour.
Referring to FIG. 6, in this embodiment the operating device for the crowdsourcing platform further includes:
a second recording unit 313, configured to record, within a second specified length of time, the task work record of the crowdsourcing account, where the second specified length of time is greater than the first specified length of time and is a positive integer multiple of it, and the work record includes the account's cumulative numbers of wrong answers and/or numbers of consecutive wrong answers over multiple periods of the first length of time;
a sending control unit 314, configured to control, according to the task work record of the crowdsourcing account, the frequency at which tasks are sent to the account.
The second recording unit 313 and the sending control unit 314 described above are the units that record the work record of the crowdsourcing account and control the frequency at which crowdsourcing tasks are sent according to that record. For example, if a user exhibits brushing behavior several times within one month, the frequency at which tasks are sent to that user is reduced; when the brushing becomes severe to a specified degree (for example, brushing occurs on 5 consecutive days), the user's task account is banned. This further prevents brushing on the crowdsourcing platform.
Referring to FIG. 7, in this embodiment the action execution unit 30 includes:
a search submodule 3101, configured to search, according to the geographic region to which the crowdsourcing account belongs, for a first test task library corresponding to that geographic region;
a retrieval submodule 3102, configured to retrieve test tasks from the first test task library and continue sending them to the crowdsourcing account;
a determination submodule 3103, configured to determine whether all of the test tasks in the first test task library have been used once;
a replacement submodule 3104, configured to, if all of the test tasks in the first test task library have been used once, swap the first test task library with a second test task library corresponding to another geographic region.
There are multiple test task libraries, each containing different test tasks, and each test task library is issued for one geographic region. For example, the test tasks in test task library A are issued only to crowdsourcing accounts in the Guangzhou region. Once all of the test tasks in library A have been used, it is swapped with a test task library corresponding to another region, for example Guangzhou's library A with Hunan's library B. This reduces repeated appearances of the same test task in the same region: if the same question were answered twice by the same person, it would attract the user's attention, the user would realize it is a test task and treat it seriously while continuing to brush through the other questions.
In this embodiment, the operating device for the crowdsourcing platform further includes a third recording unit, configured to record the test task numbers sent to each crowdsourcing account, to prevent the same test task from being sent to the same account more than once. Specifically, each test task is given a number, and the number of every test task that each crowdsourcing account has answered is recorded; when a test task needs to be sent to a crowdsourcing account, the numbers of the test tasks that account has already been tested with are retrieved, and a test task with a different number is selected and sent to the user for testing.
In this embodiment, the operating device for the crowdsourcing platform further includes a sending and collection unit, configured to send, over several consecutive days and at different times of each day, test tasks to all crowdsourcing accounts participating in answering, then collect the results and analyze how people in different regions answer, thereby learning the state of mind of people in different regions and determining the time periods in which crowdsourcing tasks should be sent to each region. For example, if users in the Guangzhou region answer best at 9 a.m. every day and severe brushing occurs at 3 p.m., then in subsequent task dispatch the frequency of sending tasks to the Guangzhou region is increased at around 9 a.m. and reduced at around 3 p.m.
In this embodiment, each crowdsourcing account can also be monitored over the long term, and crowdsourcing tasks can be dispatched appropriately according to how accounts in different regions answer in different seasons. For example, if crowdsourcing accounts in the Guangzhou region show severe brushing behavior in summer, the frequency of sending crowdsourcing tasks to accounts in that region is reduced in summer; if answering is more efficient in winter, the frequency of sending crowdsourcing tasks to that region is increased in winter.
In this embodiment, the operating device for the crowdsourcing platform further includes a short-passage unit, configured to intersperse short passages of specified content between crowdsourcing tasks. That is, while crowdsourcing tasks are being sent, short passages such as jokes or inspirational ("chicken soup for the soul") pieces are interspersed between them to help the user relieve fatigue. Further, the operating device also includes a monitoring unit, configured to provide buttons such as "next question" or "stop reading" on the interface for such a passage and to record the length of time between the user receiving the passage and triggering the next action (clicking "next question", "stop reading", etc.), from which the user's reading preferences can be judged. For example, if a user quickly clicks "stop reading" whenever a corny joke appears, then after a period of analysis corny jokes are no longer sent to that user.
Referring to FIG. 8, an embodiment of the present invention further provides a computer device, which may be a server and whose internal structure may be as shown in FIG. 8. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium. The database of the computer device is used to store data such as test tasks. The network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer-readable instructions are executed by the processor, the processes of the method embodiments above are implemented.
This embodiment further provides a computer non-volatile readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the processes of the method embodiments above are implemented.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the patent. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, falls equally within the scope of patent protection of the present invention.
Claims (20)
- An operating method for a crowdsourcing platform, comprising: sending a test task to a crowdsourcing account according to a preset sending rule, wherein the answer displayed for the test task on the crowdsourcing account is a wrong answer, and the test task has the same format as a regular crowdsourcing task; receiving feedback from the crowdsourcing account on the test task, and comparing the feedback with a preset correct answer to determine whether the crowdsourcing account's answer is correct; and if the crowdsourcing account's answer is wrong, retrieving other test tasks from a test task library and continuing to send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account, wherein, when the crowdsourcing account answers wrongly a specified number of times in succession, the sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The operating method for a crowdsourcing platform according to claim 1, wherein the preset sending rule comprises: sending one test task after every specified number of crowdsourcing tasks have been sent; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks in a random distribution; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks relatively densely, at random, during the initial stage of sending the crowdsourcing tasks.
- The operating method for a crowdsourcing platform according to claim 1, wherein, after the step of, if the crowdsourcing account's answer is wrong, retrieving other test tasks from the test task library and continuing to send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account, wherein, when the crowdsourcing account answers wrongly a specified number of times in succession, the sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended, the method comprises: recording, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account; and penalizing the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- The operating method for a crowdsourcing platform according to claim 3, wherein, after the step of recording, within the first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account, the method comprises: recording, within a second specified length of time, the task work record of the crowdsourcing account, wherein the second specified length of time is greater than the first specified length of time and is a positive integer multiple of the first specified length of time, and the work record comprises the crowdsourcing account's cumulative numbers of wrong answers and/or numbers of consecutive wrong answers over a plurality of periods of the first length of time; and controlling, according to the task work record of the crowdsourcing account, the frequency of sending to the crowdsourcing account.
- The operating method for a crowdsourcing platform according to claim 1, wherein the step of retrieving other test tasks from the test task library and continuing to send them to the crowdsourcing account comprises: searching, according to the geographic region to which the crowdsourcing account belongs, for a first test task library corresponding to that geographic region; and retrieving test tasks from the first test task library and continuing to send them to the crowdsourcing account.
- The operating method for a crowdsourcing platform according to claim 5, wherein, after the step of retrieving test tasks from the first test task library and continuing to send them to the crowdsourcing account, the method comprises: determining whether all of the test tasks in the first test task library have been used once; and if so, swapping the first test task library with a second test task library corresponding to another geographic region.
- The operating method for a crowdsourcing platform according to claim 1, wherein the method further comprises: interspersing short passages of specified content between crowdsourcing tasks.
- An operating device for a crowdsourcing platform, comprising: a test sending unit, configured to send a test task to a crowdsourcing account according to a preset sending rule, wherein the answer displayed for the test task on the crowdsourcing account is a wrong answer, and the test task has the same format as a regular crowdsourcing task; a receiving and comparison unit, configured to receive feedback from the crowdsourcing account on the test task and compare the feedback with a preset correct answer to determine whether the crowdsourcing account's answer is correct; and an action execution unit, configured to, if the crowdsourcing account's answer is wrong, retrieve other test tasks from a test task library and continue sending them to the crowdsourcing account until the crowdsourcing account answers correctly, and then send crowdsourcing tasks to the crowdsourcing account, wherein, when the crowdsourcing account answers wrongly a specified number of times in succession, the sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The operating device for a crowdsourcing platform according to claim 8, wherein the preset sending rule comprises: sending one test task after every specified number of crowdsourcing tasks have been sent; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks in a random distribution; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks relatively densely, at random, during the initial stage of sending the crowdsourcing tasks.
- The operating device for a crowdsourcing platform according to claim 8, further comprising: a first recording unit, configured to record, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers given by the crowdsourcing account; and a penalty unit, configured to penalize the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- The operating device for a crowdsourcing platform according to claim 10, further comprising: a second recording unit, configured to record, within a second specified length of time, the task work record of the crowdsourcing account, wherein the second specified length of time is greater than the first specified length of time and is a positive integer multiple of the first specified length of time, and the work record comprises the crowdsourcing account's cumulative numbers of wrong answers and/or numbers of consecutive wrong answers over a plurality of periods of the first length of time; and a sending control unit, configured to control, according to the task work record of the crowdsourcing account, the frequency of sending to the crowdsourcing account.
- The operating device for a crowdsourcing platform according to claim 8, wherein the action execution unit comprises: a search submodule, configured to search, according to the geographic region to which the crowdsourcing account belongs, for a first test task library corresponding to that geographic region; and a retrieval submodule, configured to retrieve test tasks from the first test task library and continue sending them to the crowdsourcing account.
- The operating device for a crowdsourcing platform according to claim 12, wherein the action execution unit further comprises: a determination submodule, configured to determine whether all of the test tasks in the first test task library have been used once; and a replacement submodule, configured to, if all of the test tasks in the first test task library have been used once, swap the first test task library with a second test task library corresponding to another geographic region.
- The operating device for a crowdsourcing platform according to claim 8, further comprising: a short-passage unit, configured to intersperse short passages of specified content between crowdsourcing tasks.
- 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现众包平台的作业方法,该众包平台的作业方法,包括:A computer device comprising a memory and a processor, the memory storing computer readable instructions, wherein the processor implements a job method of a crowdsourcing platform when the computer readable instructions are executed, the crowdsourcing platform Working methods, including:按照预设的发送规则发送测试任务给众包账户;其中,所述测试任务在所述众包账户显示的答案是错误答案,所述测试任务与常规的众包任务的模式相同;Sending a test task to the crowdsourcing account according to a preset sending rule; wherein the answer displayed by the test task in the crowdsourcing account is a wrong answer, and the test task is in the same mode as a conventional crowdsourcing task;接收所述众包账户针对所述测试任务的反馈信息,并将所述反馈信息与预设的正确答案进行比对,判断众包账户的回答是否正确;Receiving feedback information of the crowdsourcing account for the test task, and comparing the feedback information with a preset correct answer, and determining whether the answer of the crowdsourcing account is correct;若所述众包账户的回答错误,则在测试任务库中调取其它测试任务继续发送给所述众包账户,直到所述众包账户回答正确,然后发送众包任务给所述众包账户;其中,当众包账户连续回答错误指定次数,则暂停发送众包任务和测试任务给所述众包账户。If the answer to the crowdsourcing account is incorrect, the other test tasks are retrieved from the test task library and sent to the crowdsourcing account until the crowdsourcing account answers correctly, and then the crowdsourcing task is sent to the crowdsourcing account. Wherein, when the crowdsourcing account continuously answers the wrong number of times, the crowdsourcing task and the test task are suspended for the crowdsourcing account.
- 根据权利要求15所述的计算机设备,其特征在于,所述预设的发送规则,包括:The computer device according to claim 15, wherein the preset sending rule comprises:每发送指定数量的众包任务后,发送一道测试任务;或者,Send a test task after each specified number of crowdsourcing tasks are sent; or,按照众包任务的总数量,设置对应数量的测试任务,并且随机分布地发送所述测试任务;或者;Setting a corresponding number of test tasks according to the total number of crowdsourcing tasks, and transmitting the test tasks randomly; or;按照众包任务的总数量,设置对应数量的测试任务,并在发送众包任务的开始阶段,相对密集地随机发送所述测试任务。According to the total number of crowdsourcing tasks, a corresponding number of test tasks are set, and at the beginning of the sending crowdsourcing task, the test tasks are relatively randomly and randomly transmitted.
- The computer device according to claim 15, wherein, after the step of retrieving other test tasks from the test task library and continuing to send them to the crowdsourcing account if the answer of the crowdsourcing account is wrong, until the crowdsourcing account answers correctly, then sending crowdsourcing tasks to the crowdsourcing account, and suspending the sending of crowdsourcing tasks and test tasks to the crowdsourcing account when the crowdsourcing account answers wrongly a specified number of consecutive times, the method comprises: recording, within a first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account; and penalizing the crowdsourcing account according to the cumulative number of wrong answers and/or the number of consecutive wrong answers.
- The computer device according to claim 17, wherein, after the step of recording, within the first specified length of time, the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account, the method comprises: recording, within a second specified length of time, the task performance of the crowdsourcing account, wherein the second specified length of time is greater than the first specified length of time and is a positive integer multiple of the first specified length of time, and the task performance includes the cumulative number of wrong answers and/or the number of consecutive wrong answers of the crowdsourcing account over a plurality of the first lengths of time; and controlling, according to the task performance of the crowdsourcing account, the frequency at which tasks are sent to the crowdsourcing account.
- A non-volatile computer-readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by a processor, implement an operating method for a crowdsourcing platform, the operating method comprising: sending test tasks to a crowdsourcing account according to a preset sending rule, wherein the answer displayed for a test task at the crowdsourcing account is a wrong answer, and the test task has the same format as a regular crowdsourcing task; receiving feedback information from the crowdsourcing account for the test task, comparing the feedback information with a preset correct answer, and determining whether the answer of the crowdsourcing account is correct; and, if the answer of the crowdsourcing account is wrong, retrieving other test tasks from a test task library and continuing to send them to the crowdsourcing account until the crowdsourcing account answers correctly, and then sending crowdsourcing tasks to the crowdsourcing account, wherein, when the crowdsourcing account answers wrongly a specified number of consecutive times, the sending of crowdsourcing tasks and test tasks to the crowdsourcing account is suspended.
- The non-volatile computer-readable storage medium according to claim 19, wherein the preset sending rule comprises: sending one test task after every specified number of crowdsourcing tasks has been sent; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks in a random distribution; or setting a corresponding number of test tasks according to the total number of crowdsourcing tasks and sending the test tasks randomly and relatively densely during the initial stage of sending the crowdsourcing tasks.
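The dispatch procedure recited in claims 15 and 19 (send a disguised test task, compare the worker's feedback with a stored correct answer, keep drawing test tasks until the account answers correctly, and suspend the account after a specified number of consecutive wrong answers) can be made concrete with a small control-flow sketch. The Python below is illustrative only: the class and callable names, the task data shape, and the threshold value are assumptions, not the claimed implementation.

```python
MAX_CONSECUTIVE_WRONG = 3  # "specified number" of consecutive wrong answers (illustrative value)


class TestTaskDispatcher:
    """Minimal sketch of the claimed dispatch loop for one crowdsourcing account."""

    def __init__(self, test_task_library, send, receive_feedback):
        # each test task: {"id": ..., "shown_answer": <deliberately wrong>, "correct_answer": ...}
        self.test_task_library = list(test_task_library)
        self.send = send                          # callable(account, task) -> None
        self.receive_feedback = receive_feedback  # callable(account, task) -> submitted answer
        self.consecutive_wrong = 0

    def run_test_cycle(self, account):
        """Send test tasks until the account answers correctly, or suspend it."""
        while self.test_task_library:
            task = self.test_task_library.pop(0)  # the task displays a wrong answer by design
            self.send(account, task)
            answer = self.receive_feedback(account, task)
            if answer == task["correct_answer"]:  # compare feedback with the preset correct answer
                self.consecutive_wrong = 0
                return True                        # regular crowdsourcing tasks may now be sent
            self.consecutive_wrong += 1
            if self.consecutive_wrong >= MAX_CONSECUTIVE_WRONG:
                self.suspend(account)              # stop both crowdsourcing tasks and test tasks
                return False
        return True  # library exhausted before the suspension threshold was reached

    def suspend(self, account):
        print(f"account {account}: suspended after {self.consecutive_wrong} consecutive wrong answers")
```

In a full system this cycle would be driven by the platform's task queue; it is shown here only to make the claimed control flow explicit.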
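Claims 16 and 20 enumerate three alternative preset sending rules. The sketch below, again with hypothetical function names, an assumed test-task ratio, and the "initial stage" taken as the first third of the batch (a detail the claims do not fix), shows how the rules could be realized as interleaving strategies.

```python
import random


def interleave_every_k(crowd_tasks, test_tasks, k=5):
    """Rule 1: send one test task after every k crowdsourcing tasks."""
    out, tests = [], iter(test_tasks)
    for i, task in enumerate(crowd_tasks, start=1):
        out.append(("crowd", task))
        if i % k == 0:
            nxt = next(tests, None)
            if nxt is not None:
                out.append(("test", nxt))
    return out


def interleave_random(crowd_tasks, test_tasks, ratio=0.1, front_loaded=False):
    """Rules 2 and 3: size the test set from the total crowdsourcing task count and
    scatter it randomly, optionally concentrating the tests in the opening stage."""
    n_tests = min(len(test_tasks), max(1, int(len(crowd_tasks) * ratio)))
    # "initial stage" taken here as the first third of the batch (assumption)
    limit = max(1, len(crowd_tasks) // 3) if front_loaded else len(crowd_tasks)
    positions = set(random.sample(range(limit), min(n_tests, limit)))
    out, tests = [], iter(test_tasks[:n_tests])
    for i, task in enumerate(crowd_tasks):
        if i in positions:
            out.append(("test", next(tests)))
        out.append(("crowd", task))
    return out
```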
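Claims 17 and 18 count wrong answers over two nested time windows and use the counts both to penalize the account and to throttle how quickly further tasks are sent. A minimal sketch follows; the hour/day window lengths, the penalty threshold, the placeholder penalty action, and the linear slow-down rule are all assumptions the claims leave open.

```python
from collections import deque


class AccountMonitor:
    """Sketch of window-based penalties and send-rate control for one crowdsourcing account."""

    WINDOW_1 = 60 * 60            # first specified length of time: one hour, in seconds (assumption)
    WINDOW_2 = 24 * WINDOW_1      # second length: 24 first-windows, i.e. one day (assumption)
    PENALTY_THRESHOLD = 5         # wrong answers tolerated per first window (assumption)

    def __init__(self):
        self.wrong_events = deque()  # timestamps (seconds) of wrong answers

    def record_wrong(self, timestamp):
        self.wrong_events.append(timestamp)

    def wrong_in_window(self, now, window):
        return sum(1 for t in self.wrong_events if now - t <= window)

    def penalty(self, now):
        """Penalize based on the wrong answers accumulated inside the first window."""
        if self.wrong_in_window(now, self.WINDOW_1) >= self.PENALTY_THRESHOLD:
            return "apply_penalty"   # the concrete penalty is not specified by the claims; placeholder
        return None

    def send_interval(self, now, base_interval=10.0):
        """Throttle delivery: the more wrong answers across the second window, the slower tasks are sent."""
        wrong = self.wrong_in_window(now, self.WINDOW_2)
        return base_interval * (1 + wrong / self.PENALTY_THRESHOLD)
```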
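Claims 12 and 13 keep a separate test task library per geographic region and mutually swap an exhausted first library with a second region's library. The sketch below assumes each task carries an "id" field, that "used once" is tracked per region, and that the second region is simply the first other region available; these details are illustrative only.

```python
class RegionalTestTaskPool:
    """Sketch of per-region test task libraries with swap-on-exhaustion."""

    def __init__(self, libraries):
        # libraries: {"region name": [ {"id": ..., "correct_answer": ...}, ... ]}
        self.libraries = {region: list(tasks) for region, tasks in libraries.items()}
        self.used = {region: set() for region in libraries}

    def next_task(self, account_region):
        """Return an unused test task for the account's region, swapping libraries if exhausted."""
        if account_region not in self.libraries:
            return None
        task = self._unused_task(account_region)
        if task is None:                      # every task in the first library has been used once
            self._swap_with_other_region(account_region)
            task = self._unused_task(account_region)
        return task

    def _unused_task(self, region):
        for task in self.libraries[region]:
            if task["id"] not in self.used[region]:
                self.used[region].add(task["id"])
                return task
        return None

    def _swap_with_other_region(self, region):
        """Mutually replace the exhausted library with a second region's library."""
        others = [r for r in self.libraries if r != region]
        if not others:
            return
        other = others[0]                     # choice of the second region is an assumption
        self.libraries[region], self.libraries[other] = self.libraries[other], self.libraries[region]
        self.used[region], self.used[other] = self.used[other], self.used[region]
```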
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810343877.9 | 2018-04-17 | ||
CN201810343877.9A CN108734196A (en) | 2018-04-17 | 2018-04-17 | Operational method, device, computer equipment and the storage medium of crowdsourcing platform |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019200736A1 (en) | 2019-10-24 |
Family
ID=63938988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/095318 WO2019200736A1 (en) | 2018-04-17 | 2018-07-11 | Operating method and device for crowdsourcing platform, computer device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108734196A (en) |
WO (1) | WO2019200736A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109582581B (en) * | 2018-11-30 | 2023-08-25 | 平安科技(深圳)有限公司 | Result determining method based on crowdsourcing task and related equipment |
CN111291376B (en) * | 2018-12-08 | 2023-05-05 | 深圳慕智科技有限公司 | Web vulnerability verification method based on crowdsourcing and machine learning |
CN111339231B (en) * | 2020-02-25 | 2024-04-09 | 合肥四维图新科技有限公司 | Crowd-sourced update result processing method and device |
CN113032426A (en) * | 2021-04-08 | 2021-06-25 | 平安科技(深圳)有限公司 | Intelligent verification method, device and equipment for identification result and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104599084A (en) * | 2015-02-12 | 2015-05-06 | 北京航空航天大学 | Crowd calculation quality control method and device |
US20150356488A1 (en) * | 2014-06-09 | 2015-12-10 | Microsoft Corporation | Evaluating Workers in a Crowdsourcing Environment |
CN105184653A (en) * | 2015-09-08 | 2015-12-23 | 苏州大学 | Trust-based crowdsourcing worker screening method for social network |
CN107239689A (en) * | 2017-05-11 | 2017-10-10 | 深圳市华傲数据技术有限公司 | A kind of recognition methods of checking information based on mass-rent and system |
CN107871196A (en) * | 2016-09-28 | 2018-04-03 | 郑州大学 | A kind of mass-rent method for evaluating quality based on slip task window |
- 2018-04-17: CN application CN201810343877.9A, published as CN108734196A (not active, withdrawn)
- 2018-07-11: PCT application PCT/CN2018/095318, published as WO2019200736A1 (active, application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150356488A1 (en) * | 2014-06-09 | 2015-12-10 | Microsoft Corporation | Evaluating Workers in a Crowdsourcing Environment |
CN104599084A (en) * | 2015-02-12 | 2015-05-06 | 北京航空航天大学 | Crowd calculation quality control method and device |
CN105184653A (en) * | 2015-09-08 | 2015-12-23 | 苏州大学 | Trust-based crowdsourcing worker screening method for social network |
CN107871196A (en) * | 2016-09-28 | 2018-04-03 | 郑州大学 | A kind of mass-rent method for evaluating quality based on slip task window |
CN107239689A (en) * | 2017-05-11 | 2017-10-10 | 深圳市华傲数据技术有限公司 | A kind of recognition methods of checking information based on mass-rent and system |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598286A (en) * | 2020-12-23 | 2021-04-02 | 作业帮教育科技(北京)有限公司 | Crowdsourcing user cheating behavior detection method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108734196A (en) | 2018-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019200736A1 (en) | 2019-10-24 | Operating method and device for crowdsourcing platform, computer device and storage medium |
Shaw et al. | | Designing incentives for inexpert human raters |
Krumboltz | | An accountability model for counselors |
JP5320153B2 (en) | Comment output device, comment output method, and program | |
US20090319338A1 (en) | Method and system for virtual mentoring | |
US6341267B1 (en) | Methods, systems and apparatuses for matching individuals with behavioral requirements and for managing providers of services to evaluate or increase individuals' behavioral capabilities | |
US20090233263A1 (en) | System and method for teaching | |
US20050260549A1 (en) | Method of analyzing question responses to select among defined possibilities and means of accomplishing same | |
Hayden et al. | | Impact of worry on career thoughts, career decision state, and cognitive information processing skills |
US20070207449A1 (en) | Method of analyzing question responses to select among defined possibilities and means of accomplishing same | |
US20140308646A1 (en) | Method and System for Creating Interactive Training and Reinforcement Programs | |
US20140045164A1 (en) | Methods and apparatus for assessing and promoting learning | |
Woo | | The 2004 user survey at the University of Hong Kong Libraries |
US20200357297A1 (en) | Systems and Methods for Inquiry-Based Learning Including Collaborative Question Generation | |
CN110569347A (en) | Data processing method and device, storage medium and electronic equipment | |
CN110619772A (en) | Data processing method, device, equipment and medium | |
Wooderson et al. | | Evaluating the performance improvement preferences of disability service managers: An exploratory study using Gilbert's behavior engineering model |
JP5394009B2 (en) | Problem information output device, problem information output method, and program | |
CN112219215A (en) | | Action improving system and action improving method |
Drewitt et al. | | Practitioners’ experiences of learning and implementing Counselling for Depression (CfD) in routine practice settings |
CN114926758A (en) | | Method for analyzing classroom student participation |
Uminski et al. | | GenBio-MAPS as a case study to understand and address the effects of test-taking motivation in low-stakes program assessments |
JP6599534B1 (en) | | Information processing apparatus, information processing method, and program |
US20080299532A1 (en) | Method, system, signal and program product for assuring feedback is received from students of an online course | |
CN111737448A (en) | | Question selection method and system based on basic subject short answer of answer duration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18915299; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18915299; Country of ref document: EP; Kind code of ref document: A1 |