CN111460811A - Crowdsourcing task answer verification method and device, computer equipment and storage medium - Google Patents

Info

Publication number
CN111460811A
CN111460811A
Authority
CN
China
Prior art keywords
answer
value
answers
preset
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010135251.6A
Other languages
Chinese (zh)
Inventor
王健宗 (Wang Jianzong)
李佳琳 (Li Jialin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010135251.6A priority Critical patent/CN111460811A/en
Publication of CN111460811A publication Critical patent/CN111460811A/en
Priority to PCT/CN2020/117671 priority patent/WO2021174814A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an answer verification method and device for crowdsourcing tasks, computer equipment, and a storage medium, wherein the method comprises the following steps: obtaining each response answer corresponding to the target task as an initial answer; performing semantic recognition on each initial answer; determining a plurality of classes of reference answers according to the semantic recognition results; determining the credibility value of each class of reference answers; when the credibility value does not reach a preset standard value, obtaining a simulation answer corresponding to the target task through a preset model; counting the similarity values between the reference answers and the simulation answer; selecting the similarity value with the maximum value; taking the reference answer corresponding to that similarity value as the target answer; and determining the response answer corresponding to the target answer as the response answer passing verification. According to the method and the device, the accuracy and the efficiency of answer verification for crowdsourcing tasks can be improved.

Description

Crowdsourcing task answer verification method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an answer verification method and apparatus for a crowdsourcing task, a computer device, and a storage medium.
Background
With the rapid development of network technologies, companies or organizations often issue crowdsourcing tasks to internet objects through the internet, in order to obtain more creative information or to solve cross-domain problems efficiently and conveniently. Crowdsourcing distributes internally executed work tasks to external execution objects for completion, so as to shorten task completion time.
Because different execution objects have different learning modes and different implementation logics, the execution results of different execution objects on the same task differ. Meanwhile, because the execution objects have learning capabilities, it cannot be directly determined which execution object provides a response answer that is certainly better than the others, so the response answers cannot be verified directly. At present, in order to ensure that accurate task answers are obtained, a crowdsourcing task is generally distributed to a plurality of execution objects, the response answer of each execution object is obtained, and the response answers are verified by manual screening.
Disclosure of Invention
The embodiment of the application aims to provide an answer verification method for a crowdsourcing task, so as to solve the problem in the prior art that answer verification efficiency is low when crowdsourcing tasks are verified manually.
In order to solve the above technical problem, an embodiment of the present application provides an answer verification method for a crowdsourcing task, including:
acquiring each response answer corresponding to the target task from all response answers acquired by the client as an initial answer, wherein each response answer corresponds to one response object;
performing semantic recognition on each initial answer through a natural language semantic recognition mode to obtain semantic recognition results of N initial answers, wherein N is the number of the response objects, and N is a positive integer;
combining the semantic recognition results two by two, taking each combination as a group of results, counting similarity values between the semantic recognition results in each group of results in a similarity calculation mode, and taking the two semantic recognition results in the group as reference answers of the same class to obtain M types of reference answers if the obtained similarity value is greater than a preset similarity threshold value, wherein M is less than or equal to N, and N is a positive integer;
determining the credibility value of each type of reference answer through a preset consistency check mode;
selecting the credibility value with the largest value from all the credibility values of the reference answers as the maximum credibility value, and comparing the maximum credibility value with a preset standard value to obtain a comparison result;
and if the comparison result is that the maximum credibility value is greater than or equal to the preset standard value, taking the reference answer corresponding to the maximum credibility value as a target answer, and confirming the response answer corresponding to the target answer as the response answer passing the verification.
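As a non-authoritative illustration, the claimed flow (steps S1 through S6 above) can be sketched in Python. The grouping below is a greedy simplification of the pairwise combination described in the claims, and all function names, the similarity function, and the thresholds are assumptions, not part of the patent:

```python
# Minimal sketch of the claimed verification flow (steps S1-S6).
# `recognize` and `similarity` stand in for the semantic-recognition and
# similarity-calculation modes; all names here are illustrative.

def verify_answers(initial_answers, recognize, similarity, sim_threshold, standard_value):
    # S2: semantic recognition of each initial answer
    results = [recognize(a) for a in initial_answers]
    # S3: group recognition results into reference-answer classes
    # (greedy stand-in for the pairwise combination in the claims)
    classes = []
    for r in results:
        for cls in classes:
            if similarity(r, cls[0]) > sim_threshold:
                cls.append(r)
                break
        else:
            classes.append([r])
    # S4: credibility value of each class = its share of all answers
    n = len(results)
    credibility = [len(cls) / n for cls in classes]
    # S5/S6: compare the maximum credibility value with the preset standard
    best = max(range(len(classes)), key=lambda i: credibility[i])
    if credibility[best] >= standard_value:
        return classes[best][0]   # reference answer passing verification
    return None                   # fall back to the simulation-answer path
```

With exact-match similarity, three identical answers out of four yield a credibility value of 0.75 for that class, which passes a 0.6 standard value.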
Further, the semantic recognition of each initial answer by means of natural language semantic recognition to obtain semantic recognition results of N initial answers includes:
performing word segmentation processing on the initial answer through a preset word segmentation mode to obtain basic words included in the initial answer;
converting the basic word segmentation into word vectors, and clustering the word vectors through a clustering algorithm to obtain a clustering center corresponding to each word vector;
and acquiring preset semantics corresponding to the clustering center corresponding to each word vector as a semantic recognition result of the initial answer.
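The segmentation-then-clustering step above can be sketched as follows. The toy 2-D "word vectors", the preset cluster centers, and the semantics table are illustrative assumptions; the patent does not name a specific clustering algorithm, so nearest-center assignment stands in for it:

```python
import math

# Toy sketch: assign each word vector to the nearest preset cluster center,
# then read off that center's preset semantics as the recognition result.
# Centers and the semantics table are made-up illustrative values.

CENTERS = {"greeting": (0.0, 0.0), "farewell": (5.0, 5.0)}
SEMANTICS = {"greeting": "HELLO", "farewell": "GOODBYE"}

def nearest_center(vec):
    # pick the cluster center with the smallest Euclidean distance
    return min(CENTERS, key=lambda c: math.dist(vec, CENTERS[c]))

def recognize(word_vectors):
    # semantic recognition result: preset semantics of each vector's center
    return [SEMANTICS[nearest_center(v)] for v in word_vectors]
```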
Further, the word segmentation processing on the initial answer through a preset word segmentation mode to obtain a basic word segmentation included in the initial answer includes:
performing word segmentation analysis on the basic sentence to obtain K word segmentation sequences;
aiming at each word segmentation sequence, calculating the occurrence probability of each word segmentation sequence according to word sequence data of the preset training corpus to obtain the occurrence probability of K word segmentation sequences;
and selecting the word segmentation sequence corresponding to the occurrence probability reaching a preset probability threshold from the occurrence probabilities of the K word segmentation sequences as a target word segmentation sequence, and taking each word segmentation in the target word segmentation sequence as a basic word segmentation contained in the initial answer.
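A minimal sketch of scoring the K candidate segmentation sequences against corpus statistics and keeping the one whose probability reaches the preset threshold; the bigram table, the floor probability, and the threshold value are made-up illustrative values:

```python
# Sketch: score candidate word-segmentation sequences by corpus bigram
# probabilities and keep the first one reaching the preset threshold.
# The bigram table and threshold below are illustrative assumptions.

BIGRAM_PROB = {("crowd", "sourcing"): 0.9, ("crow", "dsourcing"): 0.01}

def sequence_probability(seq):
    # product of adjacent-pair probabilities (unseen pairs get a small floor)
    p = 1.0
    for pair in zip(seq, seq[1:]):
        p *= BIGRAM_PROB.get(pair, 1e-6)
    return p

def pick_segmentation(candidates, threshold=0.5):
    for seq in candidates:
        if sequence_probability(seq) >= threshold:
            return seq        # target word-segmentation sequence
    return None
```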
Further, after the credibility value with the largest value is selected from the credibility values of all the reference answers as the maximum credibility value and is compared with a preset standard value to obtain a comparison result, the answer verification method for the crowdsourcing task further includes:
if the comparison result is that the maximum credibility value is smaller than the preset standard value, acquiring the target task, inputting the target task into a preset model, and obtaining a simulation answer through the preset model;
and for each type of the reference answers, counting similarity values of the reference answers and the simulation answers to obtain M similarity values, selecting the similarity value with the largest numerical value from the M similarity values, taking the reference answer corresponding to the similarity value with the largest numerical value as a target answer, and confirming the answer corresponding to the target answer as the verified answer.
Further, after the credibility value with the largest value is selected from the credibility values of all the reference answers as the maximum credibility value and is compared with a preset standard value to obtain a comparison result, the answer verification method for the crowdsourcing task further includes:
The response object comprises a first object and a second object, the response answer of the first object is a first answer, the response answer of the second object is a second answer, the first object and the second object each correspond to a preset weight, and the preset weight of the first object is smaller than the preset weight of the second object.
If the comparison result is that the maximum credibility value is smaller than the preset standard value, acquiring a first preset weight corresponding to the first answer and a second preset weight corresponding to the second answer;
determining the credibility weight of each type of reference answer according to the reference answer types to which the first answer and the second answer belong, the first preset weight and the second preset weight;
determining the weighted credibility value of each type of reference answer according to the credibility weight of each type of reference answer and a preset weight verification method;
and selecting the weighted credibility value with the largest numerical value as a target weighted credibility value, acquiring a reference answer corresponding to the target weighted credibility value as a target answer, and determining a response answer corresponding to the target answer as a response answer passing verification.
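A hedged sketch of the weighted fallback above: each response object carries a preset weight, and a reference-answer class's weighted credibility is taken here as its share of the total weight (the exact "preset weight verification method" is not specified in the patent, so this aggregation rule is an assumption):

```python
# Sketch of weighted credibility: a class's weighted credibility value is
# the summed weight of its answers divided by the total weight.
# The aggregation rule and sample weights are illustrative assumptions.

def weighted_credibility(answers):
    # answers: list of (class_label, preset_weight) pairs
    total = sum(w for _, w in answers)
    by_class = {}
    for label, w in answers:
        by_class[label] = by_class.get(label, 0.0) + w
    return {label: w / total for label, w in by_class.items()}

def pick_target(answers):
    # target answer = class with the largest weighted credibility value
    cred = weighted_credibility(answers)
    return max(cred, key=cred.get)
```

Note how a single high-weight responder can outvote two low-weight ones, which is the point of weighting the fallback.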
Further, after the weighted credibility value with the largest numerical value is selected as the target weighted credibility value, the reference answer corresponding to the target weighted credibility value is taken as the target answer, and the response answer corresponding to the target answer is determined as the response answer passing the verification, the method further includes:
acquiring historical response answers of the first object and the second object;
judging the response accuracy rate of the first object by verifying the proportion of the response answers passing the verification in the historical response answers of the first object, and judging the response accuracy rate of the second object by verifying the proportion of the response answers passing the verification in the historical response answers of the second object;
and updating the preset weight of the first object and the preset weight of the second object according to the response accuracy and a preset classification threshold, to obtain the updated preset weight of the first object and the updated preset weight of the second object.
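The weight-update rule above can be sketched as follows. The patent only says weights are updated from response accuracy against a classification threshold; the fixed step size and the [0, 1] bounds here are illustrative assumptions:

```python
# Sketch: a response object's accuracy is the share of its historical
# answers that passed verification; its preset weight is then raised or
# lowered around a preset classification threshold.
# The +/- step and the [0, 1] bounds are illustrative assumptions.

def accuracy(history):
    # history: list of booleans, True = answer passed verification
    return sum(history) / len(history)

def update_weight(weight, history, threshold=0.5, step=0.1):
    if accuracy(history) >= threshold:
        return min(1.0, weight + step)   # reward accurate responders
    return max(0.0, weight - step)       # penalize inaccurate ones
```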
In order to solve the technical problems, the invention adopts a technical scheme that: provided is an answer verification device for a crowdsourcing task, including:
the initial answer obtaining module is used for obtaining each response answer corresponding to the target task from all response answers obtained by the client as an initial answer, wherein each response answer corresponds to one response object;
the semantic recognition result module is used for performing semantic recognition on each initial answer in a natural language semantic recognition mode to obtain semantic recognition results of N initial answers, wherein N is the number of the response objects, and N is a positive integer;
the reference answer classification module is used for combining the semantic recognition results two by two, taking each combination as a group of results, counting the similarity value between the semantic recognition results in each group of results in a similarity calculation mode, and taking the two semantic recognition results in the group as the same type of reference answers to obtain M types of reference answers if the obtained similarity value is greater than a preset similarity threshold value, wherein M is less than or equal to N, and N is a positive integer;
the credibility value statistic module is used for determining the credibility value of each type of reference answers in a preset consistency check mode;
the credibility value comparison module is used for selecting the credibility value with the largest numerical value from the credibility values of all the reference answers as the maximum credibility value, and comparing the maximum credibility value with a preset standard value to obtain a comparison result;
the simulation answer obtaining module is used for obtaining the target task if the comparison result is that the maximum credibility value is smaller than the preset standard value, inputting the target task into a preset model, and obtaining a simulation answer through the preset model;
and the answer verification module is used for counting the similarity values of the reference answers and the simulation answers for each type of the reference answers to obtain M similarity values, selecting the similarity value with the largest numerical value from the M similarity values, taking the reference answer corresponding to the similarity value with the largest numerical value as a target answer, and confirming the answer corresponding to the target answer as the verified answer.
Further, the semantic recognition result module comprises:
a basic word segmentation obtaining unit, configured to perform word segmentation processing on the initial answer in a preset word segmentation manner to obtain a basic word segmentation included in the initial answer;
the word vector acquisition unit is used for converting the basic word segmentation into word vectors and clustering the word vectors through a clustering algorithm to obtain a clustering center corresponding to each word vector;
and the semantic recognition result unit is used for acquiring preset semantics corresponding to the clustering center corresponding to each word vector as a semantic recognition result of the initial answer.
In order to solve the technical problems, the invention adopts a technical scheme that: a computer device is provided, including one or more processors and a memory for storing one or more programs, so as to cause the one or more processors to implement the answer verification scheme for the crowdsourcing task as described in any one of the above.
In order to solve the technical problems, the invention adopts a technical scheme that: a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an answer verification scheme for a crowdsourcing task as in any one of the above.
In the above scheme, according to the answer verification method for the crowdsourcing task, the response answer of each response object corresponding to the target task is obtained from all response answers obtained from the client and used as an initial answer; semantic recognition is performed on each initial answer, the semantic recognition results are combined two by two, each combination is taken as a group of results, the similarity value between the semantic recognition results in each group is counted through a similarity calculation mode, and if the obtained similarity value is greater than a preset similarity threshold, the two semantic recognition results in the group are taken as the same class of reference answers, so that M classes of reference answers are obtained; the credibility value of each class of reference answers is determined through a preset consistency check mode; the credibility value with the largest value is selected from the credibility values of all the reference answers as the maximum credibility value and compared with a preset standard value to obtain a comparison result; and if the comparison result is that the maximum credibility value is greater than or equal to the preset standard value, the reference answer corresponding to the maximum credibility value is taken as the target answer, and the response answer corresponding to the target answer is confirmed as the response answer passing the verification. By determining the maximum credibility value, performing similarity value statistics, and obtaining the similarity value with the maximum value, the verified response answer is finally confirmed, which can effectively improve the answer verification efficiency of the crowdsourcing task.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a schematic application environment diagram of an answer verification method for a crowdsourcing task according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of a method for validating answers to crowdsourcing tasks according to an embodiment of the application;
fig. 3 is a flowchart of an implementation of step S2 in an answer verification method for a crowdsourcing task according to an embodiment of the present application;
fig. 4 is a flowchart illustrating an implementation of step S21 in the method for verifying answers to crowdsourcing tasks according to the present application;
fig. 5 is a flowchart of an implementation after step S5 in the method for verifying answers to crowdsourcing tasks according to the embodiment of the present application;
fig. 6 is a flowchart of another implementation after step S5 in the answer verification method for crowdsourcing task according to the embodiment of the present application;
fig. 7 is a flowchart illustrating an implementation of step S57 in the method for verifying answers to crowdsourcing tasks according to the present application;
FIG. 8 is a schematic diagram of an answer validation apparatus for a crowdsourcing task according to an embodiment of the present application;
fig. 9 is a schematic diagram of a computer device provided in an embodiment of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
The present invention will be described in detail below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a web browser application, a search-type application, an instant messaging tool, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the answer verification method for the crowdsourcing task provided in the embodiments of the present application is generally executed by a server, and accordingly, an answer verification device for the crowdsourcing task is generally disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 shows an embodiment of an answer verification method for a crowdsourcing task.
It should be noted that, provided substantially the same result is obtained, the method of the present invention is not limited to the flow sequence shown in fig. 2, and the method includes the following steps:
s1: and obtaining each response answer corresponding to the target task from all response answers obtained by the client as an initial answer, wherein each response answer corresponds to one response object.
Specifically, after a target task is generated or obtained, the server pushes the target task to a plurality of objects, the objects answer the target task to obtain a response answer, the response answer is fed back to the server through the client, and the server receives the response answer sent by each client through a network transmission protocol and uses the response answer as an initial answer.
At the server, the information of each response object is pre-stored, and after a response object feeds back a response answer for the task, a mapping relation between the response answer and the response object is established. After the initial answers are obtained, the response object corresponding to each initial answer is obtained, that is, a list of all objects that participated in the target task and fed back response answers to the server is obtained.
The responding object in this embodiment may specifically be a preset network model (machine learning model or neural network model), or may also be an individual with autonomous learning ability and discrimination ability, such as a preset computing engine or a big data platform, and is not limited in this respect. For the same crowdsourcing task, different objects give respective answer answers due to different structures and implementation logics.
It should be noted that the server may perform dispatch and push of multiple tasks at the same time, and may receive response answers from different tasks in the same time period, and in order to facilitate subsequent screening of crowdsourcing reference answers, the server sets a task identifier for each task in advance. And selecting an initial answer corresponding to the target task from all answer answers through the target task identifier.
For example, in a specific embodiment, all response answers obtained from the client carry three task identifiers, namely task A, task B, and task C. The task identifier of the current crowdsourcing task is task A, so the response answers with the task identifier task A are selected as the initial answers.
The task identifier may be represented by characters, numbers, or a combination of both, for example, the task identifier "TPSB5201906280236".
The target task in the present embodiment is a statement type of an objective fact, for example, recognition and transcription of characters on a given image, and the like.
In this embodiment, an electronic device (for example, the server shown in fig. 1) on which the answer verification method for crowdsourcing tasks runs may communicate with the terminal devices in a wired or wireless manner. It should be noted that the wireless connection means may include, but are not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
S2: and the method is used for performing semantic recognition on each initial answer in a natural language semantic recognition mode to obtain semantic recognition results of N initial answers, wherein N is the number of response objects, and N is a positive integer.
Specifically, semantic recognition is performed on each initial answer through a natural language semantic recognition method to obtain a semantic recognition result of each initial answer.
Natural language semantic recognition mainly adopts NLP (Natural Language Processing). Since understanding natural language requires extensive knowledge about the external world and the ability to apply that knowledge, natural language cognition is also regarded as an AI-complete problem. NLP tasks mainly refer to tasks related to semantic understanding or parsing of natural language; common NLP tasks include, but are not limited to, Speech Recognition, Chinese Automatic Segmentation (Chinese word segmentation), Part-of-Speech Tagging, Text Classification, Parsing, Automatic Summarization, Question Answering, and Information Extraction.
S3: combining the semantic recognition results two by two, taking each combination as a group of results, counting the similarity value between the semantic recognition results in each group of results in a similarity calculation mode, and taking the two semantic recognition results in the group as the same type of reference answers to obtain M types of reference answers if the obtained similarity value is greater than a preset similarity threshold, wherein M is less than or equal to N, and N is a positive integer.
Specifically, the semantic recognition results are combined pairwise, each combination is taken as a group of results, and similarity calculation is applied to obtain the similarity value between the semantic recognition results in each group; any two semantic recognition results whose similarity value is greater than the preset similarity threshold are classified into the same class of reference answers.
The preset similarity threshold is set in advance by the server and is used to classify the different semantic recognition results. Reasonable values include 0.9, 0.8, 0.7, 0.6, and the like; the specific value can be set according to actual conditions and is not limited here. Preferably, the preset similarity threshold in this embodiment is 0.8.
The similarity calculation method includes, but is not limited to: Minkowski Distance, Manhattan Distance, Euclidean Distance, Chebyshev Distance, Hamming Distance, Mahalanobis Distance, and the like.
Preferably, the similarity value between the semantic recognition results in each group of results is calculated by adopting Euclidean distance; by adopting Euclidean distance calculation, the similarity value between the semantic recognition results in each group of results can be calculated quickly and efficiently.
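The pairwise comparison of step S3 with the preferred Euclidean distance might look like the sketch below. Since the patent compares a similarity value (larger is more similar) while Euclidean distance grows with dissimilarity, the 1/(1 + d) distance-to-similarity mapping is an assumption for illustration:

```python
import itertools
import math

# Sketch of step S3: form every two-result combination, compute a
# Euclidean-distance-based similarity, and keep pairs whose similarity
# exceeds the preset threshold. The 1/(1+d) mapping is an assumption.

def similarity(u, v):
    return 1.0 / (1.0 + math.dist(u, v))

def similar_pairs(results, threshold=0.8):
    # results: semantic recognition results represented as vectors
    return [(i, j)
            for i, j in itertools.combinations(range(len(results)), 2)
            if similarity(results[i], results[j]) > threshold]
```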
S4: and determining the credibility value of each type of reference answer through a preset consistency check mode.
Wherein, the reliability value refers to the reliability of the type of reference answer.
In one embodiment, the confidence value of each type of reference answer is calculated using the following formula:
CR_1 + CR_2 + … + CR_m = 1

P = {P_1, P_2, …, P_m}

CR_m = P_m / n

wherein P = {P_1, P_2, …, P_m} is the reference answer set, P_m is the number of response answers contained in the m-th class of reference answers, CR_m is the credibility value corresponding to the m-th class of reference answers, m is the number of classes of reference answers, and n is the number of initial answers.
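Taking the credibility value CR_m as the share P_m / n of initial answers falling into class m (an assumption consistent with the stated constraint that the CR values sum to 1), the computation is a one-liner; the function name is illustrative:

```python
# Credibility values as class shares: CR_m = P_m / n, where P_m is the
# number of response answers in class m and n the total number of initial
# answers, so the CR values sum to 1.

def credibility_values(class_counts):
    n = sum(class_counts)
    return [p / n for p in class_counts]
```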
S5: and selecting the confidence value with the largest value from the confidence values of all the reference answers as the maximum confidence value, and comparing the maximum confidence value with a preset standard value to obtain a comparison result.
Specifically, the confidence value with the largest value is selected from the confidence values of all types of reference answers and compared with a preset standard value, and whether the reference answers meeting the requirements exist in all the classified reference answers or not is determined according to the comparison result.
The preset standard value is preset by the server and is used to evaluate whether a confidence value meets the requirement. A reasonable range is (0.5, 1); the specific value can be set according to the actual situation and is not limited here. Preferably, the preset standard value in this embodiment is 0.6.
And the comparison result comprises that the maximum credibility value is smaller than a preset standard value and the maximum credibility value is larger than or equal to the preset standard value.
S6: and if the comparison result is that the maximum credibility value is greater than or equal to the preset standard value, taking the reference answer corresponding to the maximum credibility value as the target answer, and confirming the response answer corresponding to the target answer as the response answer passing the verification.
Specifically, when the maximum confidence value is greater than or equal to the preset standard value, it indicates that the reference answer corresponding to the maximum confidence value is a reliable answer, and the reference answer is used as a target answer, and the answer corresponding to the target answer is determined as an answer passing verification.
The answer verification result is sent to the terminal devices 101, 102 and 103 so that the response objects can learn the result of the answer verification.
In this embodiment, the response answer of each response object corresponding to the crowdsourcing task is obtained from all the response answers acquired from the client and used as an initial answer. Semantic recognition is performed on each initial answer; the semantic recognition results are combined in pairs, each combination is taken as a group, and the similarity value between the semantic recognition results in each group is calculated by means of similarity calculation. If the obtained similarity value is greater than the preset similarity threshold, the two semantic recognition results in the group are treated as the same class of reference answers, yielding M classes of reference answers. The confidence value of each class of reference answers is then determined through the preset consistency check, and the confidence value with the largest value is selected from the confidence values of all classes and compared with the preset standard value. If the comparison result is that the maximum confidence value is greater than or equal to the preset standard value, the reference answer corresponding to the maximum confidence value is taken as the target answer, and the response answer corresponding to the target answer is confirmed as a response answer passing verification. By classifying answers through similarity statistics and selecting the class with the maximum confidence value, the answer verification efficiency of the crowdsourcing task can be effectively improved.
Referring to fig. 3, fig. 3 shows a specific implementation manner of step S2, in step S2, each initial answer is semantically recognized by means of natural language semantic recognition, so as to obtain semantic recognition results of N initial answers, where N is the number of response objects, and N is a positive integer, and the detailed description is as follows:
s21: and performing word segmentation processing on the initial answer through a preset word segmentation mode to obtain basic words included in the initial answer.
S22: and converting the basic word segmentation into word vectors, and clustering the word vectors through a clustering algorithm to obtain a clustering center corresponding to each word vector.
S23: and acquiring preset semantics corresponding to the clustering center corresponding to each word vector as a semantic recognition result of the initial answer.
In the embodiment, the semantic recognition result of the initial answer is obtained by performing word segmentation processing on the initial answer and clustering word vectors through a clustering algorithm, so that the semantic recognition of the initial answer is more accurate, and the verification accuracy of the answer of the crowdsourcing task is further improved.
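Steps S22–S23 can be sketched as nearest-centre assignment against preset cluster centres, each carrying a preset semantic label. The centre vectors and labels below are hypothetical; the patent does not specify the clustering algorithm's parameters:

```python
import math

# Hypothetical preset cluster centres and their preset semantics (step S23)
CENTERS = {
    "affirmative": [1.0, 0.0],
    "negative":    [0.0, 1.0],
}

def nearest_semantic(word_vector):
    """Assign a word vector to the closest preset cluster centre and
    return that centre's preset semantic label (steps S22-S23)."""
    def dist(center):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(word_vector, center)))
    return min(CENTERS, key=lambda label: dist(CENTERS[label]))

def recognize(answer_word_vectors):
    """Semantic recognition result of one initial answer: the preset semantics
    of the cluster centres its word vectors fall into."""
    return [nearest_semantic(v) for v in answer_word_vectors]

print(recognize([[0.9, 0.1], [0.2, 0.8]]))  # ['affirmative', 'negative']
```

In practice the word vectors would come from step S21's basic participles via a word-embedding model.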
Referring to fig. 4, fig. 4 shows a specific implementation manner of step S21, in which the initial answer is segmented by a preset word segmentation method to obtain the basic participles contained in the initial answer, which is described in detail as follows:
s211: and performing word segmentation analysis on the basic sentence to obtain K word segmentation sequences.
S212: and aiming at each word segmentation sequence, calculating the occurrence probability of each word segmentation sequence according to word sequence data of a preset training corpus to obtain the occurrence probability of K word segmentation sequences.
S213: and selecting the word segmentation sequence corresponding to the occurrence probability reaching a preset probability threshold from the occurrence probabilities of the K word segmentation sequences as a target word segmentation sequence, and taking each word segmentation in the target word segmentation sequence as a basic word segmentation contained in the initial answer.
In this embodiment, word segmentation analysis is performed on the basic sentences and the occurrence probability of each word segmentation sequence is calculated, so that the basic participles contained in the initial answer can be accurately obtained, allowing the initial answer to yield an accurate semantic recognition result.
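Steps S211–S213 can be sketched with a simple unigram model standing in for the preset training corpus's word-sequence data; the frequency table, candidate sequences and threshold below are all illustrative assumptions:

```python
# Hypothetical word-frequency table standing in for the training corpus
UNIGRAM = {"crowd": 0.02, "sourcing": 0.01, "crowdsourcing": 0.05, "task": 0.04}

def sequence_probability(words):
    """Occurrence probability of one word segmentation sequence under a
    simple unigram model (unknown words get a small floor probability)."""
    p = 1.0
    for w in words:
        p *= UNIGRAM.get(w, 1e-6)
    return p

def pick_target_sequence(sequences, threshold):
    """S213: among the K candidate sequences, select the most probable one
    whose occurrence probability reaches the preset probability threshold."""
    scored = [(sequence_probability(s), s) for s in sequences]
    best_p, best_s = max(scored)
    return best_s if best_p >= threshold else None

candidates = [["crowd", "sourcing", "task"], ["crowdsourcing", "task"]]
print(pick_target_sequence(candidates, threshold=1e-4))  # ['crowdsourcing', 'task']
```

Each word in the returned target sequence would then serve as a basic participle of the initial answer.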
Referring to fig. 5, fig. 5 shows an embodiment after step S5, which is described in detail as follows:
s51: and if the comparison result is that the maximum reliability value is smaller than a preset standard value, acquiring the target task, inputting the target task into a preset model, and obtaining a simulation answer through the preset model.
Specifically, each type of target task is preset with a corresponding task model, and the task model has no autonomous learning capability, but can obtain a simulation answer with accuracy meeting the requirement. And when the maximum credibility value is smaller than a preset standard value, acquiring a target task, inputting the target task into a preset model, and obtaining a simulation answer through the preset model.
In one embodiment, the crowdsourcing task is character recognition and transcription in an image, the preset model is an OCR matching model, the confidence values lie between 0.65 and 0.75, and the preset standard value is 0.55; in step S51, when the calculated maximum confidence value is lower than 0.55, the preset model is used to obtain the simulation answer.
S52: and for each type of reference answers, counting similarity values of the reference answers and the simulation answers to obtain M similarity values, selecting the similarity value with the maximum value from the M similarity values, taking the reference answer corresponding to the similarity value with the maximum value as a target answer, and confirming the response answer corresponding to the target answer as the response answer passing the verification.
Specifically, simulation answers are obtained through a preset model, similarity values of each reference answer and each simulation answer are obtained through a similarity calculation mode, the similarity values are arranged from large to small, the reference answer with the largest similarity value is selected as a target answer, and a response answer corresponding to the target answer is confirmed as a response answer passing verification.
Here, the manner of similarity calculation has been described in step S3 and is not repeated.
In one embodiment, the Euclidean distances between each reference answer and the simulation answer are obtained, for example 0.9, 0.8, 0.5, 0.2 and 0.1. Since a smaller Euclidean distance indicates a higher similarity between a reference answer and the simulation answer, the reference answer corresponding to the Euclidean distance of 0.1 is selected as the target answer, and the response answer corresponding to the target answer is confirmed as the response answer passing verification.
In this embodiment, the accuracy of the answer verification of the crowdsourcing task can be further improved by obtaining the simulation answer through the preset model and determining the response answer passing the verification.
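The selection in step S52 can be sketched as follows, assuming each reference answer and the simulation answer are comparable feature vectors (the vectors are illustrative):

```python
import math

def closest_reference(reference_vectors, simulated_vector):
    """S52: return the index of the reference answer whose Euclidean distance
    to the simulation answer is smallest (smallest distance = highest similarity)."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, simulated_vector)))
    return min(range(len(reference_vectors)), key=lambda i: dist(reference_vectors[i]))

refs = [[0.9, 0.1], [0.4, 0.6], [0.1, 0.9]]
sim = [0.15, 0.85]
print(closest_reference(refs, sim))  # 2
```

The reference answer at the returned index would be taken as the target answer, and its corresponding response answer confirmed as passing verification.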
Referring to fig. 6, fig. 6 shows an embodiment after step S6, and details of the implementation process are as follows:
s53: the response object comprises a first object and a second object, the answer of the first object is a first answer, the answer of the second object is a second answer, the first object and the second object are both corresponding to preset weights, and the preset weight of the first object is smaller than the preset weight of the second object.
Specifically, in the server, the response object includes a first object and a second object, where the answer to the first object is a first answer, the answer to the second object is a second answer, the first object and the second object both have corresponding preset weights, and the preset weight of the first object is smaller than the preset weight of the second object.
The preset weight is a response weight set by the server according to the response object's past answers to crowdsourcing tasks, and can be used to evaluate the credibility of the response object's answers.
It should be understood that, by giving the first object and the second object corresponding preset weights, when the calculated confidence value does not meet the requirement (is smaller than a preset standard value), the answer answers corresponding to the first object and the second object can be weighted according to the preset weights, so that the confidence value distribution of the answer answers is more reasonable, and the accuracy of the confidence value is improved.
S54: and if the comparison result is that the maximum credibility value is smaller than a preset standard value, acquiring a first preset weight corresponding to the first answer and a second preset weight corresponding to the second answer.
Specifically, if the maximum confidence value is smaller than a preset standard value, the answer cannot be verified, a first answer and a second answer of a first answer object and a second answer object need to be obtained, different weights are given to the first answer and the second answer, and the confidence of the answer is determined according to the given weights.
The preset weight is preset by the server; a reasonable range is (0.1, 1). The specific values can be set according to the actual situation and are not limited here. Preferably, in this embodiment the first preset weight is 0.6 and the second preset weight is 0.8.
S55: and determining the credibility weight of each type of reference answer according to the reference answer types to which the first answer and the second answer belong, the first preset weight and the second preset weight.
Specifically, the crowdsourcing task has different question types, and correspondingly the response answers belong to different classes of reference answers. The first preset weight and the second preset weight are applied according to the reference answer classes to which the first response answer and the second response answer belong, and the credibility weight of each class of reference answers is finally determined.
The credibility weight is the credibility proportion given to each type of reference answers by the server.
S56: and determining the weighted credibility value of each type of reference answer according to the credibility weight of each type of reference answer and a preset weight verification mode.
And the weighted credibility value is the credibility value corresponding to the answer.
In one embodiment, the weighted confidence value of each type of reference answer is calculated as follows:
Suppose the first response answer falls in class P1; then for the reference answer set P = {P1, P2, ..., Pm}, the corresponding weighted confidence values are CR = {CR1 + 0.5, CR2, ..., CRm}.

wherein Pm is the number of response answers contained in the m-th class of reference answers, CR is the set of weighted confidence values corresponding to the classes of reference answers, m is the number of classes of reference answers, and n is the number of initial answers.
S57: and selecting the weighting credibility value with the largest numerical value as a target weighting credibility value, acquiring a reference answer corresponding to the target weighting credibility value as a target answer, and confirming the response answer corresponding to the target answer as the response answer passing the verification.
Specifically, the largest weighted confidence value is obtained; the reference answer corresponding to this weighted confidence value is the best answer to the crowdsourcing task, and the response answer corresponding to it is confirmed as the response answer passing verification.
In this embodiment, the reliability of each type of reference answer is obtained by determining the reliability weight of each type of reference answer and calculating the weighted reliability value of each type of reference answer, so that the accuracy of verification of the answer answers of the crowdsourcing task is improved.
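A minimal sketch of steps S56–S57, assuming the base confidence value is again CR_m = P_m / n and that the credibility weight appears as an additive boost (e.g. +0.5 for the class holding a high-weight object's answer, as in the example above); the class sizes and boosts are illustrative:

```python
def weighted_confidence(class_sizes, boosts):
    """Base confidence CR_m = P_m / n, plus the class's credibility weight."""
    n = sum(class_sizes)
    return [p / n + b for p, b in zip(class_sizes, boosts)]

def target_class(class_sizes, boosts):
    """S57: index of the class with the largest weighted confidence value."""
    weighted = weighted_confidence(class_sizes, boosts)
    return max(range(len(weighted)), key=weighted.__getitem__)

# Class 0 holds the boosted answer: 0.2 + 0.5 = 0.7 beats 0.5 and 0.3
print(target_class([2, 5, 3], [0.5, 0.0, 0.0]))  # 0
```

Without the boost, the largest class (index 1) would win, which illustrates how the credibility weight shifts the outcome toward answers from more reliable response objects.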
Referring to fig. 7, fig. 7 shows a specific implementation manner of step S57, in step S57, a weighted confidence value with the largest value is selected as a target weighted confidence value, a reference answer corresponding to the target weighted confidence value is obtained as a target answer, and a response answer corresponding to the target answer is determined as a response answer passing verification, which is described in detail as follows:
s571: and acquiring historical response answers of the first object and the second object.
Specifically, the server divides the response objects of the crowdsourcing task into first objects and second objects; the first object and the second object refer to different classes of objects, not to any particular individual. By obtaining the historical response answers of the first object and the second object to crowdsourcing tasks, their response levels for crowdsourcing tasks can be obtained.
S572: and judging the response accuracy rate of the first object by verifying the response answer proportion in the historical response answers of the second object, and judging the response accuracy rate of the second object by verifying the response answer proportion in the historical response answers of the second object.
Specifically, the accuracy corresponding to the responses of the first object and the second object is obtained by examining their historical response answers and the proportion that passed verification, thereby obtaining the response capability of the first object and the second object for crowdsourcing tasks.
S573: and updating the preset weight of the first object and the preset weight of the second object according to the response accuracy and the preset classification threshold value to obtain the updated preset weight of the first object and the updated preset weight of the second object.
Specifically, the response accuracy rates of the first objects and the second objects are obtained. If some first objects have an accuracy rate exceeding the preset classification threshold, they are adjusted to be second objects; if some second objects have an accuracy rate below the preset classification threshold, they are adjusted to be first objects.
The preset classification threshold is preset at the server; a reasonable range is (0.6, 0.9). The specific value can be set according to the actual situation and is not limited here. Preferably, the preset classification threshold in this embodiment is 0.6.
In this embodiment, by judging the response accuracy rates of the first object and the second object, the preset weights of the first object and the second object are updated, so that the response answers of the response objects to the crowdsourcing task are weighted more appropriately, further improving the accuracy and efficiency of the crowdsourcing task.
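Steps S571–S573 can be sketched as follows, using the embodiment's preferred values (first weight 0.6, second weight 0.8, classification threshold 0.6) and representing each historical response answer as 1 if it passed verification, 0 otherwise; this encoding is an assumption:

```python
FIRST_WEIGHT, SECOND_WEIGHT = 0.6, 0.8   # preferred preset weights
THRESHOLD = 0.6                          # preferred classification threshold

def updated_weight(history):
    """S572-S573: response accuracy = share of verified answers in the
    history; at or above the threshold the object gets the second-object
    weight, otherwise it gets the first-object weight."""
    accuracy = sum(history) / len(history)  # history: 1 = passed verification
    return SECOND_WEIGHT if accuracy >= THRESHOLD else FIRST_WEIGHT

print(updated_weight([1, 1, 1, 0]))  # accuracy 0.75 -> 0.8
print(updated_weight([1, 0, 0, 0]))  # accuracy 0.25 -> 0.6
```

This captures the promotion/demotion between the first-object and second-object classes driven by historical answer quality.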
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
Referring to fig. 8, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an answer verification apparatus for a crowdsourcing task, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 8, the answer verification apparatus for a crowdsourcing task of the present embodiment includes: an initial answer obtaining module 81, a semantic recognition result module 82, a reference answer classifying module 83, a credibility value statistic module 84, a credibility value comparing module 85 and an answer verifying module 86. Wherein:
an initial answer obtaining module 81, configured to obtain each answer corresponding to the target task from all answer answers obtained by the client as an initial answer, where each answer corresponds to one answer object;
a semantic recognition result module 82, configured to perform semantic recognition on each initial answer in a natural language semantic recognition manner to obtain semantic recognition results of N initial answers, where N is the number of response objects, and N is a positive integer;
the reference answer classification module 83 is configured to combine the semantic recognition results two by two, use each combination as a group of results, calculate a similarity value between the semantic recognition results in each group of results in a similarity calculation manner, and use two semantic recognition results in the group as the same type of reference answers if the obtained similarity value is greater than a preset similarity threshold value to obtain M types of reference answers, where M is not greater than N and N is a positive integer;
the credibility value counting module 84 is configured to determine the credibility value of each type of reference answer through a preset consistency check mode;
the reliability value comparison module 85 is configured to select a reliability value with a largest value from the reliability values of all the reference answers as a maximum reliability value, and compare the maximum reliability value with a preset standard value to obtain a comparison result;
and the answer verification module 86 is configured to, if the comparison result is that the maximum confidence value is greater than or equal to the preset standard value, take the reference answer corresponding to the maximum confidence value as the target answer, and confirm the answer corresponding to the target answer as the verified answer.
Further, the semantic recognition result module 82 includes:
and the basic word segmentation obtaining unit is used for carrying out word segmentation processing on the initial answer through a preset word segmentation mode to obtain the basic word segmentation contained in the initial answer.
And the word vector acquisition unit is used for converting the basic word segmentation into word vectors and clustering the word vectors through a clustering algorithm to obtain a clustering center corresponding to each word vector.
And the semantic recognition result unit is used for acquiring preset semantics corresponding to the clustering center corresponding to each word vector as a semantic recognition result of the initial answer.
Further, the basic participle obtaining unit includes:
and the word segmentation sequence acquisition subunit is used for carrying out word segmentation analysis on the basic sentence to obtain K word segmentation sequences.
And the word segmentation sequence occurrence probability subunit is used for calculating the occurrence probability of each word segmentation sequence according to word sequence data of a preset training corpus and aiming at each word segmentation sequence to obtain the occurrence probability of K word segmentation sequences.
And the target word segmentation sequence determining subunit is used for selecting the word segmentation sequence corresponding to the occurrence probability reaching the preset probability threshold from the occurrence probabilities of the K word segmentation sequences as a target word segmentation sequence, and taking each word segmentation in the target word segmentation sequence as a basic word segmentation contained in the initial answer.
Further, the answer verification device for the crowdsourcing task further comprises:
the simulation answer obtaining module is used for obtaining a target task if the comparison result is that the maximum credibility value is smaller than a preset standard value, inputting the target task into a preset model, and obtaining a simulation answer through the preset model;
and the target answer selecting module is used for counting the similarity values of the reference answers and the simulated answers aiming at each type of reference answers to obtain M similarity values, selecting the similarity value with the maximum value from the M similarity values, taking the reference answer corresponding to the similarity value with the maximum value as the target answer, and confirming the answer corresponding to the target answer as the answer passing the verification.
Further, the answer verification device for the crowdsourcing task further comprises:
the preset weight setting module is used for the response objects to comprise a first object and a second object, the response answer of the first object is a first response answer, the response answer of the second object is a second response answer, the first object and the second object are both corresponding to preset weights, and the preset weight of the first object is smaller than the preset weight of the second object;
the preset weight obtaining module is used for obtaining a first preset weight corresponding to the first answer and a second preset weight corresponding to the second answer if the comparison result shows that the maximum credibility value is smaller than a preset standard value;
the credibility weight determining module is used for determining the credibility weight of each type of reference answer according to the reference answer types to which the first answer and the second answer belong, the first preset weight and the second preset weight;
the weighted credibility value determining module is used for determining the weighted credibility value of each type of reference answers according to the credibility weight of each type of reference answers and a preset weight verification mode;
and the weighted credibility value selecting module is used for selecting the weighted credibility value with the largest numerical value as a target weighted credibility value, acquiring a reference answer corresponding to the target weighted credibility value as a target answer, and confirming the response answer corresponding to the target answer as the response answer passing the verification.
Further, the weighted credibility value selection module comprises:
a historical answer obtaining unit for obtaining historical answer answers of the first object and the second object;
the response accuracy rate verification unit is used for judging the response accuracy rate of the first object by verifying the passing response answer proportion in the historical response answers of the first object, and judging the response accuracy rate of the second object by verifying the passing response answer proportion in the historical response answers of the second object;
and the updated object unit is used for updating the preset weight of the first object and the preset weight of the second object according to the response accuracy and the preset classification threshold value to obtain the updated preset weight of the first object and the updated preset weight of the second object.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 9, fig. 9 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 9 includes a memory 91, a processor 92, and a network interface 93 communicatively connected to each other via a system bus. It is noted that only a computer device 9 having the three components memory 91, processor 92 and network interface 93 is shown, but it should be understood that not all of the shown components are required; more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 91 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 91 may be an internal storage unit of the computer device 9, such as a hard disk or a memory of the computer device 9. In other embodiments, the memory 91 may also be an external storage device of the computer device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device 9. Of course, the memory 91 may also comprise both an internal storage unit of the computer device 9 and an external storage device thereof. In this embodiment, the memory 91 is generally used for storing an operating system installed in the computer device 9 and various types of application software, such as the program code of the answer verification method for a crowdsourcing task. Further, the memory 91 can also be used to temporarily store various types of data that have been output or are to be output.
Processor 92 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 92 is typically used to control the overall operation of the computer device 9. In this embodiment, the processor 92 is configured to run the program code stored in the memory 91 or to process data, for example, to run the program code of the answer verification method for a crowdsourcing task.
The network interface 93 may include a wireless network interface or a wired network interface, and the network interface 93 is generally used to establish a communication connection between the computer device 9 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium storing an answer verification program, where the answer verification program is executable by at least one processor to cause the at least one processor to perform the steps of the answer verification method for a crowdsourcing task described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method of the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and is provided for the purpose of enabling a thorough understanding of the disclosure of the application. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that the present application may be practiced without modification or with equivalents of some of the features described in the foregoing embodiments. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.

Claims (10)

1. An answer verification method for a crowdsourcing task, comprising:
acquiring, from all response answers collected by a client, each response answer corresponding to a target task as an initial answer, wherein each response answer corresponds to one response object;
performing semantic recognition on each initial answer by means of natural language semantic recognition to obtain semantic recognition results of N initial answers, wherein N is the number of response objects and N is a positive integer;
pairing the semantic recognition results two by two and treating each pair as a group of results; computing, by means of a similarity calculation, the similarity value between the two semantic recognition results in each group; and, if the obtained similarity value is greater than a preset similarity threshold, classing the two semantic recognition results in that group as reference answers of the same class, so as to obtain M classes of reference answers, wherein M is less than or equal to N and M is a positive integer;
determining a credibility value for each class of reference answers by means of a preset consistency check;
selecting the largest of the credibility values of all classes of reference answers as a maximum credibility value, and comparing the maximum credibility value with a preset standard value to obtain a comparison result;
and, if the comparison result is that the maximum credibility value is greater than or equal to the preset standard value, taking the reference answer corresponding to the maximum credibility value as a target answer and confirming the response answer corresponding to the target answer as a verified response answer.
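As an illustrative, non-limiting sketch of the grouping and verification steps of claim 1: the union-find grouping, token-set Jaccard similarity, and frequency-share consistency check below are all assumptions, since the claim does not fix a particular similarity calculation or consistency check.

```python
from itertools import combinations

def group_answers(results, similarity_fn, threshold):
    """Class semantic recognition results into reference-answer classes:
    any pair whose similarity exceeds the threshold is merged into the
    same class (transitively, via union-find)."""
    n = len(results)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if similarity_fn(results[i], results[j]) > threshold:
            parent[find(i)] = find(j)

    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(results[i])
    return list(classes.values())

def jaccard(a, b):
    # token-set Jaccard similarity, one possible "similarity calculation"
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def verify(results, threshold, standard):
    """Return the verified reference-answer class, or None when the
    maximum credibility value falls below the preset standard value."""
    classes = group_answers(results, jaccard, threshold)
    # assumed consistency check: credibility = share of answers in class
    best = max(classes, key=len)
    return best if len(best) / len(results) >= standard else None
```

With three answers of which two agree, the majority class passes a standard value of 0.6 (credibility 2/3) but fails one of 0.9.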
2. The answer verification method for a crowdsourcing task according to claim 1, wherein performing semantic recognition on each initial answer by means of natural language semantic recognition to obtain semantic recognition results of N initial answers comprises:
performing word segmentation on the initial answer in a preset word segmentation manner to obtain the basic segmented words contained in the initial answer;
converting the basic segmented words into word vectors, and clustering the word vectors by means of a clustering algorithm to obtain the cluster center corresponding to each word vector;
and acquiring the preset semantics corresponding to the cluster center of each word vector as the semantic recognition result of the initial answer.
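A minimal sketch of the cluster-center lookup in claim 2, assuming precomputed cluster centers with preset semantics attached; the nearest-center assignment by squared Euclidean distance is an assumption, as the claim names no specific clustering algorithm.

```python
def nearest_center(vec, centers):
    """Assign a word vector to the nearest preset cluster center
    (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centers, key=lambda label: dist2(vec, centers[label]))

def semantic_result(word_vectors, centers, center_semantics):
    """Map each word vector of an initial answer to the preset semantics
    of its cluster center; the resulting list is the answer's semantic
    recognition result."""
    return [center_semantics[nearest_center(v, centers)] for v in word_vectors]
```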
3. The answer verification method for a crowdsourcing task according to claim 2, wherein performing word segmentation on the initial answer in the preset word segmentation manner to obtain the basic segmented words contained in the initial answer comprises:
performing word segmentation analysis on the basic sentence to obtain K word segmentation sequences;
calculating, for each word segmentation sequence, its occurrence probability according to word sequence data of a preset training corpus, to obtain the occurrence probabilities of the K word segmentation sequences;
and selecting, from the K occurrence probabilities, the word segmentation sequence whose occurrence probability reaches a preset probability threshold as a target word segmentation sequence, and taking each segmented word in the target word segmentation sequence as a basic segmented word contained in the initial answer.
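The scoring of candidate segmentation sequences in claim 3 can be sketched as follows. The unigram language model with add-one smoothing and the toy corpus frequencies are assumptions for illustration; the claim only requires that probabilities come from word sequence data of a preset training corpus.

```python
import math

# hypothetical corpus statistics standing in for the "preset training corpus"
WORD_FREQ = {"machine learning": 30, "machine": 50, "learning": 40, "is": 100, "fun": 20}
TOTAL = sum(WORD_FREQ.values())
VOCAB = len(WORD_FREQ)

def seq_log_prob(seq):
    """Unigram log-probability of a segmentation sequence, with add-one
    smoothing for words absent from the corpus."""
    return sum(math.log((WORD_FREQ.get(w, 0) + 1) / (TOTAL + VOCAB)) for w in seq)

def pick_segmentation(candidates, log_threshold=-100.0):
    """From the K candidate word segmentation sequences, select the one
    with the highest occurrence probability, provided it reaches the
    preset probability threshold."""
    best = max(candidates, key=seq_log_prob)
    return best if seq_log_prob(best) >= log_threshold else None
```

Shorter sequences built from frequent multi-word tokens tend to score higher, since each extra token multiplies in another probability below one.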
4. The answer verification method for a crowdsourcing task according to claim 1, further comprising, after selecting the largest of the credibility values of all classes of reference answers as the maximum credibility value and comparing the maximum credibility value with the preset standard value to obtain the comparison result:
if the comparison result is that the maximum credibility value is smaller than the preset standard value, acquiring the target task, inputting the target task into a preset model, and obtaining a simulated answer from the preset model;
and, for each class of reference answers, computing the similarity value between the reference answer and the simulated answer to obtain M similarity values; selecting the largest of the M similarity values, taking the reference answer corresponding to that similarity value as the target answer, and confirming the response answer corresponding to the target answer as a verified response answer.
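The fallback of claim 4 can be sketched as below; comparing the first member of each class against the simulated answer, and reusing token-set Jaccard as the similarity measure, are both assumptions made for illustration.

```python
def jaccard(a, b):
    # token-set Jaccard similarity, one possible "similarity calculation"
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def fallback_select(classes, simulated_answer):
    """When the maximum credibility value misses the standard value,
    compare a representative of each reference-answer class with the
    model-generated simulated answer and take the most similar class."""
    sims = [jaccard(cls[0], simulated_answer) for cls in classes]
    best = max(range(len(classes)), key=sims.__getitem__)
    return classes[best]
```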
5. The answer verification method for a crowdsourcing task according to claim 1, wherein the response objects comprise a first object and a second object, the response answer of the first object is a first answer, the response answer of the second object is a second answer, each of the first object and the second object corresponds to a preset weight, and the preset weight of the first object is smaller than the preset weight of the second object; and wherein, after selecting the largest of the credibility values of all classes of reference answers as the maximum credibility value and comparing the maximum credibility value with the preset standard value to obtain the comparison result, the method further comprises:
if the comparison result is that the maximum credibility value is smaller than the preset standard value, acquiring a first preset weight corresponding to the first answer and a second preset weight corresponding to the second answer;
determining a credibility weight for each class of reference answers according to the classes to which the first answer and the second answer belong, the first preset weight and the second preset weight;
determining a weighted credibility value for each class of reference answers according to the credibility weight of that class and a preset weight verification manner;
and selecting the largest weighted credibility value as a target weighted credibility value, acquiring the reference answer corresponding to the target weighted credibility value as the target answer, and confirming the response answer corresponding to the target answer as a verified response answer.
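One way the weighted credibility of claim 5 could be computed is sketched below; treating a class's weighted credibility as the share of total preset weight carried by its responders is an assumption, since the claim leaves the "preset weight verification manner" open.

```python
def weighted_credibility(classes, weights):
    """classes: each class is a list of responder ids; weights maps a
    responder id to its preset weight.  A class's weighted credibility
    is the share of total preset weight carried by its responders."""
    total = sum(weights[r] for cls in classes for r in cls)
    return [sum(weights[r] for r in cls) / total for cls in classes]

def pick_by_weight(classes, weights):
    """Select the class with the target (largest) weighted credibility."""
    scores = weighted_credibility(classes, weights)
    best = max(range(len(classes)), key=scores.__getitem__)
    return classes[best], scores[best]
```

With weighting, a single high-weight responder can outvote a larger class of low-weight responders, which is the point of the fallback.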
6. The answer verification method for a crowdsourcing task according to claim 5, further comprising, after selecting the largest weighted credibility value as the target weighted credibility value, taking the reference answer corresponding to the target weighted credibility value as the target answer, and confirming the response answer corresponding to the target answer as a verified response answer:
acquiring historical response answers of the first object and of the second object;
determining the response accuracy of the first object from the proportion of verified response answers among the historical response answers of the first object, and determining the response accuracy of the second object from the proportion of verified response answers among the historical response answers of the second object;
and updating the preset weights of the first object and of the second object according to the response accuracies and a preset classification threshold, to obtain updated preset weights for the first object and the second object.
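A sketch of the weight update in claim 6, assuming a simple two-threshold rule with a fixed step; the threshold values, step size, and clamping to [0, 1] are illustrative assumptions, as the claim specifies only "the response accuracy and a preset classification threshold".

```python
def response_accuracy(history):
    """history: booleans, True where the historical answer was verified."""
    return sum(history) / len(history) if history else 0.0

def update_weight(current, history, high=0.8, low=0.5, step=0.1):
    """Raise the preset weight when accuracy clears the upper
    classification threshold, lower it when it falls below the lower
    one, and clamp the result to [0, 1]."""
    acc = response_accuracy(history)
    if acc >= high:
        current += step
    elif acc < low:
        current -= step
    return min(max(current, 0.0), 1.0)
```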
7. An answer verification apparatus for a crowdsourcing task, comprising:
an initial answer acquisition module, configured to acquire, from all response answers collected by a client, each response answer corresponding to a target task as an initial answer, wherein each response answer corresponds to one response object;
a semantic recognition result module, configured to perform semantic recognition on each initial answer by means of natural language semantic recognition to obtain semantic recognition results of N initial answers, wherein N is the number of response objects and N is a positive integer;
a reference answer classification module, configured to pair the semantic recognition results two by two, treat each pair as a group of results, compute, by means of a similarity calculation, the similarity value between the two semantic recognition results in each group, and, if the obtained similarity value is greater than a preset similarity threshold, class the two semantic recognition results in that group as reference answers of the same class, so as to obtain M classes of reference answers, wherein M is less than or equal to N and M is a positive integer;
a credibility value statistics module, configured to determine a credibility value for each class of reference answers by means of a preset consistency check;
a credibility value comparison module, configured to select the largest of the credibility values of all classes of reference answers as a maximum credibility value, and compare the maximum credibility value with a preset standard value to obtain a comparison result;
and an answer verification module, configured to take, if the comparison result is that the maximum credibility value is greater than or equal to the preset standard value, the reference answer corresponding to the maximum credibility value as a target answer, and confirm the response answer corresponding to the target answer as a verified response answer.
8. The answer verification apparatus for a crowdsourcing task according to claim 7, wherein the semantic recognition result module comprises:
a basic segmented word acquisition unit, configured to perform word segmentation on the initial answer in a preset word segmentation manner to obtain the basic segmented words contained in the initial answer;
a word vector acquisition unit, configured to convert the basic segmented words into word vectors and cluster the word vectors by means of a clustering algorithm to obtain the cluster center corresponding to each word vector;
and a semantic recognition result unit, configured to acquire the preset semantics corresponding to the cluster center of each word vector as the semantic recognition result of the initial answer.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the answer verification method for a crowdsourcing task according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the answer verification method for a crowdsourcing task according to any one of claims 1 to 6.
CN202010135251.6A 2020-03-02 2020-03-02 Crowdsourcing task answer verification method and device, computer equipment and storage medium Pending CN111460811A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010135251.6A CN111460811A (en) 2020-03-02 2020-03-02 Crowdsourcing task answer verification method and device, computer equipment and storage medium
PCT/CN2020/117671 WO2021174814A1 (en) 2020-03-02 2020-09-25 Answer verification method and apparatus for crowdsourcing task, computer device, and storage medium

Publications (1)

Publication Number Publication Date
CN111460811A true CN111460811A (en) 2020-07-28

Family

ID=71684147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010135251.6A Pending CN111460811A (en) 2020-03-02 2020-03-02 Crowdsourcing task answer verification method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111460811A (en)
WO (1) WO2021174814A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021174814A1 (en) * 2020-03-02 2021-09-10 平安科技(深圳)有限公司 Answer verification method and apparatus for crowdsourcing task, computer device, and storage medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN114153961B (en) * 2022-02-07 2022-05-06 杭州远传新业科技有限公司 Knowledge graph-based question and answer method and system

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US10002177B1 (en) * 2013-09-16 2018-06-19 Amazon Technologies, Inc. Crowdsourced analysis of decontextualized data
CN105117398B (en) * 2015-06-25 2018-10-26 扬州大学 A kind of software development problem auto-answer method based on crowdsourcing
CN109582581B (en) * 2018-11-30 2023-08-25 平安科技(深圳)有限公司 Result determining method based on crowdsourcing task and related equipment
CN110363194B (en) * 2019-06-17 2023-05-02 深圳壹账通智能科技有限公司 NLP-based intelligent examination paper reading method, device, equipment and storage medium
CN111460811A (en) * 2020-03-02 2020-07-28 平安科技(深圳)有限公司 Crowdsourcing task answer verification method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2021174814A1 (en) 2021-09-10


Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40031290

Country of ref document: HK

SE01 Entry into force of request for substantive examination