CN111639969B - Dynamic incentive calculation method, system, equipment and medium for crowdsourcing system - Google Patents
Dynamic incentive calculation method, system, equipment and medium for crowdsourcing system
- Publication number
- CN111639969B (application CN202010466953.2A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- cyclic neural
- user
- network model
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0206—Price or cost determination based on market factors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Strategic Management (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- Entrepreneurship & Innovation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Game Theory and Decision Science (AREA)
- Economics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a dynamic incentive calculation method and system for a crowdsourcing system. The method comprises the steps of: obtaining task data of a demander on a crowdsourcing platform and historical task question-answer data of participating users; assigning tasks to the participating users; constructing a recurrent neural network model for each participating user; training the recurrent neural network model according to the historical task question-answer data of the participating users; calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network model, the final benefit brought by different incentive values, and judging whether to give the current user an incentive value; and collecting the answers of all participating users to obtain the result required by the demander. The invention also provides a simple and efficient algorithm for solving the online profit calculation problem. Simulation experiments prove the efficiency and robustness of the invention under complex conditions, and practical experiments on crowdsourcing platforms show its efficiency and superiority over traditional methods.
Description
Technical Field
The present invention relates to data quality improvement, and more particularly, to a dynamic incentive computation method, system, device and medium for crowdsourcing systems.
Background
In practical applications, there are many problems that humans can complete easily but machines can hardly handle directly. For example, given two pictures of different sharpness, a human can quickly and accurately tell them apart, whereas a machine has difficulty doing so; judging the emotional tone of natural language text is a similar example. Against this background, crowdsourcing platforms have gained widespread attention and development.
A crowdsourcing platform is an online work-distribution platform: a demander publishes various tasks on the platform, users browse and select tasks, and the demander grants a certain reward according to the quality of the work submitted by each user.
With the continuous development of crowdsourcing platforms, how to give incentives so as to improve the quality of question-answer data has become a key issue. It has been found that appropriate incentives (e.g., reputation, money) can improve answer quality and thereby enhance the final benefit of the demander. However, too large an incentive decreases the overall benefit of the demander, while too small an incentive dampens users' enthusiasm to reply, so that high-quality answers, or the tasks themselves, may not be completed. Under such circumstances, a good incentive mechanism must not only reward users reasonably so as to obtain high-quality replies, but also allow the demander to obtain relatively high revenue from the users' answers.
Aiming at the incentive calculation problem on crowdsourcing platforms, scholars at home and abroad have carried out some work, but that work has limitations: (1) the incentive mode is single, so the improvement in overall data quality is limited; (2) every user who completes a task receives the same reward, without considering the effects of answer quality, user behavior, and so on.
Disclosure of Invention
The embodiments of the invention provide a dynamic incentive calculation method, system, equipment and medium for a crowdsourcing system, which are used to solve the problem that traditional schemes use a single incentive mode and do not consider answer quality; whether to give a certain incentive value to an answerer is determined according to that answerer's past task question-answer history, so as to maximize the quality of the task data.
In order to achieve the above purpose, the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a dynamic incentive computation method for a crowdsourcing system, the method comprising the steps of:
acquiring task data of a demander on a crowdsourcing platform and historical task question-answer data of a participating user;
assigning tasks to participating users;
constructing a recurrent neural network model for each participating user;
training the recurrent neural network model according to historical task question-answer data of the participating users;
according to the prediction results of the participating users, the tasks and the recurrent neural network model, calculating the final benefit brought by different incentive values, and judging whether to give the current user an incentive value;
the answers of all participating users are collected to obtain the results required by the demander.
Further, the acquired task data includes the number of tasks, the number of tasks allocated per round, and the total set of tasks requiring question answering; the historical task question-answer data of the participating users includes the number and quality of the users' past task answers.
Further, the recurrent neural network model is a time-series model for predicting the quality level of the answers a user outputs at a given incentive value; it is composed of a plurality of fully connected layers and, at each time node, receives the output of the previous time node, thereby forming a recurrent neural network.
Further, the main parameters of the recurrent neural network model are as follows:
1) At each time node t, the demander decides whether to give the user an incentive value, denoted as a_t; a_t = 1 means the incentive value is given, and a_t = 0 means it is not;
2) The output of the recurrent neural network model is the variable y_t, representing the probability that the user completes the task with high quality; the closer y_t is to 1, the higher the answer quality, and the closer it is to 0, the lower the answer quality;
3) There are multiple hidden states, and the transmission parameters between the input and the hidden states, between hidden states, and between the hidden states and the output are obtained by training the recurrent neural network model.
Further, the recurrent neural network model is constructed as follows:
the training data set is the user's historical task question-answer data sequence <a_t, y_t>; the model parameters are updated and optimized through back propagation of the recurrent neural network so that its performance on the training data set is optimized, and the required answer quality evaluation model is obtained after repeated iterative training.
Further, calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network, the final benefit brought by different incentive values in order to judge whether to give the current user the incentive value includes:
for a certain user, taking a candidate incentive value as input to the recurrent neural network model and obtaining the predicted answer quality corresponding to that incentive value;
constructing a final benefit function through the predicted answer quality, wherein the final benefit function comprises the benefit for completing the next task and the benefit obtained by completing the future task;
the final benefit corresponding to each incentive value is obtained by solving the corresponding final benefit function, thereby deciding whether to give the current user the incentive value.
Further, a heuristic pruning strategy is adopted to solve the corresponding final benefit function, so that the computational complexity of the model is reduced.
In a second aspect, embodiments of the present invention also provide a dynamic incentive computing system for a crowdsourcing system comprising:
the acquisition module is used for acquiring task data of a demander on the crowdsourcing platform and historical task question-answer data of a participating user;
the allocation module is used for allocating the tasks to the participating users;
the model construction module is used for constructing a recurrent neural network model for each participating user;
the model training module is used for training the recurrent neural network model according to the historical task question-answer data of the participating users;
the decision module is used for calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network model, the final benefit brought by different incentive values and judging whether to give the current user an incentive value;
and the result output module is used for collecting answers of all the participating users so as to obtain the result required by the demander.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in the first aspect.
According to the technical scheme, the method comprises the following steps: tasks are assigned to users, and a recurrent neural network model is built for prediction from each user's historical task question-answer data; through the trained model, the method predicts the question-answer data quality of a user with and without the incentive value, and calculates the final benefit accordingly. The model considers not only the benefit of the next task but also the influence of all tasks completed in the future on the final benefit, and can dynamically determine whether to give an incentive value when the current user accepts a task, so as to maximize the final benefit obtained by the demander. Simulation experiments prove the efficiency and robustness of the invention under complex conditions. Practical experiments on crowdsourcing platforms also show the efficiency and superiority of the present invention over traditional methods.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a dynamic incentive computation method for a crowdsourcing system in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a model system in an embodiment of the invention;
FIG. 3 is a schematic diagram of a Recurrent Neural Network (RNN) according to an embodiment of the present invention;
FIG. 4 is a block diagram of a dynamic incentive computing system for a crowdsourcing system in accordance with an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
On the contrary, the invention is intended to cover any alternatives, modifications, equivalents, and variations as may be included within the spirit and scope of the invention as defined by the appended claims. Further, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention.
Example 1
FIG. 1 is a flow chart of a dynamic incentive computation method for a crowdsourcing system. As shown in FIG. 1, the method comprises the following steps:
Step S100: task data submitted by a demander on the crowdsourcing platform and historical task question-answer data of the participating users are obtained. The task data comprises the task set O to be completed, the total number of tasks C, and the number of tasks to be completed in each round.
Step S200: according to the above task information, the tasks of each round are allocated to the users.
Step S300: a recurrent neural network model is constructed for each participating user for prediction, according to the participating user's historical task question-answer data.
Step S301: for each user, a corresponding recurrent neural network (RNN) model is built. The model is a time-series model for predicting the quality level of the answers the user outputs at a given incentive value. The RNN model consists of a plurality of fully connected layers and, at each time node, receives the output of the previous time node, thereby forming a recurrent neural network. Its main parameters are as follows:
1) At each time node t, the demander decides whether to give the user an incentive value, denoted as a_t; a_t = 1 means the incentive value is given, and a_t = 0 means it is not.
2) The output of the recurrent neural network model is the variable y_t, representing the probability that the user completes the task with high quality; the closer y_t is to 1, the higher the answer quality, and the closer it is to 0, the lower the answer quality.
3) There are a plurality of hidden states. The transmission parameters U, W, V between the input and the hidden states, between hidden states, and between the hidden states and the output are obtained by training the recurrent neural network model.
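For illustration only, a minimal PyTorch sketch of such a per-user model is given below; the class name AnswerQualityRNN, the hidden size and the use of torch.nn.RNN with a sigmoid output layer are assumptions made here, not details taken from the patent.

```python
import torch
import torch.nn as nn

class AnswerQualityRNN(nn.Module):
    """Per-user time-series model: the input at step t is a_t (incentive
    given or not), the output y_t is the probability of a high-quality
    answer. nn.RNN carries the input-to-hidden and hidden-to-hidden
    parameters (U, W); the linear layer carries the hidden-to-output
    parameters (V)."""

    def __init__(self, hidden_size=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, a_seq, h0=None):
        # a_seq: (batch, L, 1) tensor of 0/1 incentive decisions
        h_seq, h_last = self.rnn(a_seq, h0)
        y_seq = torch.sigmoid(self.out(h_seq))  # probabilities in [0, 1]
        return y_seq, h_last
```

As in step S300 above, a separate instance of such a model would be trained for each participating user.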
Step S400: the recurrent neural network model is trained according to the historical answer data of the participating users.
Step S401: the recurrent neural network model is trained as follows: the training data set is the user's historical task question-answer data sequence <a_t, y_t>; through back propagation of the recurrent neural network, the transfer parameters of the model are updated and its performance on the training data set is optimized. The required answer quality prediction model RNN is obtained after multiple rounds of training.
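A minimal training loop consistent with the above description might look as follows; the binary cross-entropy loss, the Adam optimizer, the epoch count and the learning rate are illustrative assumptions, and AnswerQualityRNN refers to the sketch shown earlier.

```python
import torch
import torch.nn as nn

def train_user_model(model, sequences, epochs=200, lr=1e-2):
    """Train one user's model on that user's historical sequences.
    sequences: list of (a_seq, y_seq) pairs, where a_seq is the list of
    0/1 incentive decisions and y_seq the observed answer-quality labels,
    i.e. the <a_t, y_t> training data described above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        for a_seq, y_seq in sequences:
            a = torch.tensor(a_seq, dtype=torch.float32).view(1, -1, 1)
            y = torch.tensor(y_seq, dtype=torch.float32).view(1, -1, 1)
            pred, _ = model(a)
            loss = loss_fn(pred, y)
            optimizer.zero_grad()
            loss.backward()   # back-propagation through time
            optimizer.step()
    return model
```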
Step S500: the final benefit brought by different incentive values is calculated according to the prediction results of the participating users, the tasks and the recurrent neural network model, and it is judged whether to give the current user the incentive value.
Step S501: first, the corresponding prediction result is obtained through the RNN model. The recurrent neural network RNN trained on the training set can be expressed as:
f_θ: a → y (1)
where a and y are both L-dimensional vectors: a indicates whether an incentive value is given for each of the L tasks, and y gives the probability that each answer is a high-quality answer. The first L−1 dimensions correspond to the user's previous question-answer data and are defined as the current state s, and a_L is the input for the current task. The RNN model can therefore also be written as y(s, a), where s is the current state and a indicates whether the current task is given an incentive value; its output is the probability that the current user produces a high-quality answer at that state and incentive level.
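As a sketch of how the trained model can be exposed as the function y(s, a) used below, assuming the AnswerQualityRNN example above, one might write (the helper name make_y_fn is hypothetical):

```python
import torch

def make_y_fn(model):
    """Wrap a trained per-user RNN (e.g. the AnswerQualityRNN sketch above)
    as the function y(s, a): s is a tuple of past (a_t, y_t) pairs, a is the
    incentive decision for the current task. Only the incentive sequence is
    fed to the model, matching the a -> y mapping of Eq. (1); the return
    value is the predicted probability of a high-quality answer."""
    def y_fn(s, a):
        a_seq = [float(a_t) for a_t, _ in s] + [float(a)]
        x = torch.tensor(a_seq).view(1, -1, 1)
        with torch.no_grad():
            y_seq, _ = model(x)
        return float(y_seq[0, -1, 0])
    return y_fn
```

Feeding only the incentive sequence keeps the sketch aligned with the a → y form of Eq. (1); a richer state encoding (e.g. also feeding past answer qualities as input features) would be an equally valid design choice.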
Step S502: second, the corresponding benefit function is constructed from the prediction result. The method dynamically decides whether to give an incentive value by computing the demander's expected online revenue. Suppose a user has already completed several tasks and still has several tasks left to complete; the utility function of the total profit contains not only the profit from completing the next task but also the profit to be obtained from completing future tasks. Given the current state s, the number of tasks t_n still to be completed in the future, and whether or not the incentive value a is given, the benefit function is denoted E[U(s; a; t_n)], or E[U] for short, and is defined as:
E[U(s; a; t_n)] = F(s, a) + y(s, a)·G(s'_{a,1}, t_n − 1) + [1 − y(s, a)]·G(s'_{a,0}, t_n − 1) (2)
where
F(s, a) = [1 − y(s, a)]·w_l + y(s, a)·[w_h − I(a)·b] (3)
G(s'_{a,y}, t_n − 1) = max_{a'∈{0,1}} E[U(s'_{a,y}; a'; t_n − 1)] (4)
Here s'_{a,y} denotes the state obtained by appending the pair (a, y) to the current state s, w_h and w_l are the benefits brought by a high-quality and a low-quality answer respectively, b is the cost of the incentive, and I(a) equals 1 if and only if a = 1 and 0 otherwise.
Step S503: the decision is made by maximizing the corresponding benefit function so as to decide whether to give the incentive. With the benefit function above, the decision problem can be formulated as follows: knowing the current state s and the number of tasks t_n to be completed in the future,
a = argmax_{a∈{0,1}} E[U(s; a; t_n)] (5)
If a = 1 the user is given the incentive value; if a = 0 the user is not.
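For concreteness, the sketch below evaluates E[U(s; a; t_n)] by the recursion of Eqs. (2)–(4) and applies the decision rule (5); the constants w_h, w_l and b, the function names and the y_fn interface are illustrative assumptions rather than values taken from the patent.

```python
def decide_incentive(y_fn, s, t_n, w_h=1.0, w_l=0.2, b=0.3):
    """Exact evaluation of E[U(s; a; t_n)] for a in {0, 1}, followed by
    the argmax decision of Eq. (5). y_fn(state, a) is the trained
    predictor y(s, a); s is the current (a, y) history as a tuple of
    pairs; t_n is the number of tasks still to be completed."""
    def expected_u(state, a, t):
        y = y_fn(state, a)
        immediate = (1 - y) * w_l + y * (w_h - (b if a == 1 else 0))  # F(s, a), Eq. (3)
        if t <= 1:
            return immediate
        future = 0.0
        for outcome, p in ((1, y), (0, 1 - y)):             # possible answer qualities
            s_next = state + ((a, outcome),)                 # s'_{a,y}
            future += p * max(expected_u(s_next, a2, t - 1)
                              for a2 in (0, 1))              # G(s'_{a,y}, t_n - 1), Eq. (4)
        return immediate + future                            # Eq. (2)

    return max((0, 1), key=lambda a: expected_u(s, a, t_n))  # Eq. (5): 1 = give the incentive
```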
The benefit function calculation problem can be solved directly by dynamic programming, but the time complexity is exponential. The present solution therefore proposes an efficient heuristic algorithm to solve the above problem.
1) Beam-Search algorithm: at a given moment there may still be a large number of tasks to be completed, so obtaining the final benefit at that moment may require too many iterations and excessive computational cost. Therefore, for the task at the current moment, whether or not an incentive value is given, only the influence of the m largest partial benefits is considered when calculating the benefit function, where the search width m is a manually chosen parameter; the contributions of the remaining branches are ignored. Beam-Search differs slightly from the greedy algorithm: Beam-Search keeps the effect of the top m largest benefits, whereas the greedy algorithm keeps only the single largest one. Combined with the recurrent neural network model above, the Beam-Search algorithm can compute the benefit of giving or not giving the user an incentive value and make the decision accordingly.
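The sketch below illustrates one plausible reading of this pruning strategy: partial branches over future (incentive, outcome) pairs are expanded level by level and only the m highest-scoring branches are kept. The branch score (accumulated probability-weighted benefit), the helper name beam_decide and the numeric constants are assumptions made for illustration; the returned score is a heuristic stand-in for E[U], used only to compare giving versus withholding the incentive.

```python
import heapq

def beam_decide(y_fn, s, t_n, m=3, w_h=1.0, w_l=0.2, b=0.3):
    """Beam-Search approximation of the incentive decision.
    y_fn(state, a): trained predictor y(s, a); s: current (a, y) history
    as a tuple of pairs; t_n: number of remaining tasks; m: search width."""
    def immediate(y, a):  # F(s, a) from Eq. (3)
        return (1 - y) * w_l + y * (w_h - (b if a == 1 else 0))

    def score(first_action):
        # each beam entry: (accumulated weighted benefit, path probability, state)
        beam = [(0.0, 1.0, s)]
        for step in range(t_n):
            candidates = []
            for value, prob, state in beam:
                # the first action is the one under evaluation; later actions are searched
                for a in ((first_action,) if step == 0 else (0, 1)):
                    y = y_fn(state, a)
                    for outcome, p in ((1, y), (0, 1 - y)):
                        candidates.append((value + prob * p * immediate(y, a),
                                           prob * p,
                                           state + ((a, outcome),)))
            # prune: keep only the m branches with the largest accumulated benefit
            beam = heapq.nlargest(m, candidates, key=lambda c: c[0])
        return max(c[0] for c in beam)  # heuristic stand-in for E[U(s; first_action; t_n)]

    return max((0, 1), key=score)  # 1: give the incentive value, 0: withhold it
```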
Step S600: the answers to the preceding tasks are collected, and the result is obtained from the answers of all users.
Example two
The present invention also provides an embodiment of a dynamic incentive computing system for a crowdsourcing system. Since this system corresponds to the foregoing embodiment of the dynamic incentive computation method for a crowdsourcing system, it can achieve the purposes of the present invention by executing the steps of the flow in the foregoing method embodiment; the explanations given in the method embodiment therefore also apply to the system embodiment and will not be repeated below.
As shown in fig. 4, the present embodiment provides a dynamic incentive computing system for a crowdsourcing system, comprising:
the acquiring module 91 is configured to acquire task data of a demander on the crowdsourcing platform and historical task question-answer data of a participating user;
an allocation module 92 for allocating tasks to participating users;
a model building module 93 for building a recurrent neural network model for each participating user;
the model training module 94 is configured to train the recurrent neural network model according to the historical task question-answer data of the participating user;
the decision module 95 is used for calculating the final benefit brought by different incentive values according to the prediction results of the participating users, the tasks and the recurrent neural network model, and judging whether to give the incentive value to the current user;
and a result output module 96, configured to collect answers of all the participating users to obtain the result required by the demander.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiment of the apparatus is merely exemplary, and for example, the division of the units may be a logic function division, and there may be another division manner when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may, in essence or in whole or in part, be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.
Claims (9)
1. A dynamic incentive computation method for a crowdsourcing system, characterized by: comprising the following steps:
acquiring task data of a demander on a crowdsourcing platform and historical task question-answer data of a participating user;
assigning tasks to participating users;
constructing a recurrent neural network model for each participating user;
training the recurrent neural network model according to historical task question-answer data of the participating users;
according to the prediction results of the participating users, the tasks and the recurrent neural network model, calculating the final benefit brought by different incentive values, and judging whether to give the current user an incentive value;
collecting answers of all participating users to obtain a result required by a demander;
wherein calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network, the final benefit brought by different incentive values in order to judge whether to give the current user the incentive value includes:
for a certain user, taking a candidate incentive value as input to the recurrent neural network model and obtaining the predicted answer quality corresponding to that incentive value;
constructing a final benefit function through the predicted answer quality, wherein the final benefit function comprises the benefit for completing the next task and the benefit obtained by completing the future task;
the final benefit corresponding to the incentive value is obtained by solving the corresponding final benefit function, thereby deciding whether to give the current user the incentive value.
2. A dynamic incentive computation method for a crowdsourcing system as claimed in claim 1 wherein: the acquired task data comprises: the number of tasks, the number of tasks allocated per round and the total set of tasks requiring questions and answers, and the historical task question and answer data of the participating users comprise the number and quality of the past task questions and answers of the users.
3. A dynamic incentive computation method for a crowdsourcing system as claimed in claim 1 wherein: the recurrent neural network model is a time-series model for predicting the quality level of the answers a user outputs at a given incentive value; it consists of a plurality of fully connected layers and, at each time node, receives the output of the previous time node, thereby forming a recurrent neural network.
4. A dynamic incentive computation method for a crowdsourcing system as claimed in claim 1 wherein: the main parameters of the recurrent neural network model are as follows:
1) at each time node t, the demander decides whether to give the user an incentive value, denoted as a_t; a_t = 1 means the incentive value is given, and a_t = 0 means it is not;
2) the output of the recurrent neural network model is the variable y_t, representing the probability that the user completes the task with high quality; the closer y_t is to 1, the higher the answer quality, and the closer it is to 0, the lower the answer quality;
3) there are multiple hidden states, and the transmission parameters between the input and the hidden states, between hidden states, and between the hidden states and the output are obtained by training the recurrent neural network model.
5. The method of dynamic incentive computation for a crowdsourcing system of claim 4 wherein: the recurrent neural network model is constructed as follows:
the training data set is the user's historical task question-answer data sequence <a_t, y_t>; the model parameters are updated and optimized through back propagation of the recurrent neural network so that its performance on the training data set is optimized, and the required answer quality evaluation model is obtained after repeated iterative training.
6. A dynamic incentive computation method for a crowdsourcing system as claimed in claim 1 wherein: a heuristic pruning strategy is adopted to solve the corresponding final benefit function.
7. A dynamic incentive computing system for a crowdsourcing system, comprising:
the acquisition module is used for acquiring task data of a demander on the crowdsourcing platform and historical task question-answer data of a participating user;
the allocation module is used for allocating the tasks to the participating users;
the model construction module is used for constructing a recurrent neural network model for each participating user;
the model training module is used for training the recurrent neural network model according to the historical task question-answer data of the participating users;
the decision module is used for calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network model, the final benefit brought by different incentive values and judging whether to give the current user an incentive value;
the result output module is used for collecting answers of all the participating users so as to obtain a result required by a demander;
wherein calculating, according to the prediction results of the participating users, the tasks and the recurrent neural network, the final benefit brought by different incentive values in order to judge whether to give the current user the incentive value includes:
for a certain user, taking a candidate incentive value as input to the recurrent neural network model and obtaining the predicted answer quality corresponding to that incentive value;
constructing a final benefit function through the predicted answer quality, wherein the final benefit function comprises the benefit for completing the next task and the benefit obtained by completing the future task;
the final benefit corresponding to the incentive value is obtained by solving the corresponding final benefit function, thereby deciding whether to give the current user the incentive value.
8. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010466953.2A CN111639969B (en) | 2020-05-28 | 2020-05-28 | Dynamic incentive calculation method, system, equipment and medium for crowdsourcing system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010466953.2A CN111639969B (en) | 2020-05-28 | 2020-05-28 | Dynamic incentive calculation method, system, equipment and medium for crowdsourcing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111639969A CN111639969A (en) | 2020-09-08 |
CN111639969B true CN111639969B (en) | 2023-05-26 |
Family
ID=72331212
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010466953.2A Active CN111639969B (en) | 2020-05-28 | 2020-05-28 | Dynamic incentive calculation method, system, equipment and medium for crowdsourcing system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111639969B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598286A (en) * | 2020-12-23 | 2021-04-02 | 作业帮教育科技(北京)有限公司 | Crowdsourcing user cheating behavior detection method and device and electronic equipment |
CN113379392A (en) * | 2021-06-29 | 2021-09-10 | 中国科学技术大学 | Method for acquiring high-quality data for numerical tasks in crowdsourcing scene |
CN113393056B (en) * | 2021-07-08 | 2022-11-25 | 山东大学 | Crowdsourcing service supply and demand gap prediction method and system based on time sequence |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140180779A1 (en) * | 2012-12-20 | 2014-06-26 | International Business Machines Corporation | Automated incentive computation in crowdsourcing systems |
US9911088B2 (en) * | 2014-05-01 | 2018-03-06 | Microsoft Technology Licensing, Llc | Optimizing task recommendations in context-aware mobile crowdsourcing |
US10467541B2 (en) * | 2016-07-27 | 2019-11-05 | Intuit Inc. | Method and system for improving content searching in a question and answer customer support system by using a crowd-machine learning hybrid predictive model |
CN108596335B (en) * | 2018-04-20 | 2020-04-17 | 浙江大学 | Self-adaptive crowdsourcing method based on deep reinforcement learning |
CN109889388B (en) * | 2019-03-12 | 2022-04-01 | 湖北工业大学 | Reputation theory-based dynamic contract incentive mechanism design method for mobile crowdsourcing network |
CN110264272A (en) * | 2019-06-21 | 2019-09-20 | 山东师范大学 | A kind of mobile Internet labor service crowdsourcing platform task optimal pricing prediction technique, apparatus and system |
Also Published As
Publication number | Publication date |
---|---|
CN111639969A (en) | 2020-09-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||