CN108874655B - Method and device for processing crowdsourcing test data - Google Patents

Info

Publication number
CN108874655B
CN108874655B
Authority
CN
China
Prior art keywords
tester
test
resource
testers
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710340474.4A
Other languages
Chinese (zh)
Other versions
CN108874655A (en)
Inventor
谢淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710340474.4A
Publication of CN108874655A
Application granted
Publication of CN108874655B
Legal status: Active
Anticipated expiration

Classifications

    • G06F11/3676 Test management for coverage analysis (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F11/00 Error detection; error correction; monitoring > G06F11/36 Preventing errors by testing or debugging software > G06F11/3668 Software testing > G06F11/3672 Test management)
    • G06F11/3692 Test management for test results analysis (same branch as above)
    • G06F21/577 Assessing vulnerabilities and evaluating computer system security (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity > G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms > G06F21/57 Certifying or maintaining trusted computer platforms)

Abstract

A method and device for processing crowdsourced test data, the method comprising: acquiring task description information of an application to be tested, where the task description information indicates that a crowdsourced test is to be performed on the application; selecting, based on a set of tester models and the task description information, a target tester set for testing the application from a set of candidate testers; and generating a crowdsourcing test strategy according to the target tester set and issuing the application to each tester in the target tester set according to that strategy. This scheme improves the efficiency of selecting the testers in the target tester set and discovers potential testers with good test resources and good test capability, who can provide better application test reports to the application publisher for fixing bugs in the application.

Description

Method and device for processing crowdsourcing test data
Technical Field
The present application relates to the field of big data processing technologies, and in particular, to a method and an apparatus for processing crowdsourcing test data.
Background
Because terminal devices differ in attributes such as geographic location, network environment, device model, and system version, an application may crash or respond slowly when running on devices with different attributes. Therefore, before releasing an application, the application publisher needs to test it thoroughly for bugs and release it only after the bugs are fixed. However, the publisher needs a large amount of testing resources to test the applications to be released, which results in low testing efficiency and high testing cost.
At present, application publishers mainly rely on testers who hold different types of test resources to carry out crowdsourced testing, so as to meet the publishers' requirements on test environment coverage and test quality. Specifically, crowdsourced testing works as follows: a large number of Internet users with different testing resources and testing capabilities register accounts on a crowdsourced testing platform, independently complete online application testing tasks using their own testing resources, and submit test reports to the application publisher after testing. These reports have real value and can truly and comprehensively reflect most of the application's bugs. The application publisher then analyzes the reports and fixes the bugs.
However, because the users participating in a crowdsourced test are independent of one another, the application publisher cannot proactively identify a set of potential users whose combined testing resources and capabilities are comprehensive, and then push the testing requirements to exactly those users. The publisher can only passively analyze and fix bugs according to whatever test reports the users feed back. Moreover, many users have identical or overlapping test resources, so some users are redundant or of low reference value; this increases the complexity of data processing and lengthens the whole crowdsourced test cycle. In addition, since these users are unconnected, a comprehensive and valuable set of test reports cannot necessarily be obtained quickly. Current crowdsourced tests are therefore inefficient.
Disclosure of Invention
The application provides a method and a device for processing crowdsourcing test data, which can solve the problem of low test efficiency of crowdsourcing test in the prior art.
A first aspect of the present application provides a method of processing crowdsourcing test data, the method being usable for big data processing, for example for crowdsourcing testing, the method comprising:
task description information of an application to be tested can be pulled periodically from a task stream, the task description information indicating that a crowdsourced test is to be performed on the application to be tested.
Then, based on the set of tester models and the task description information, a target tester set for testing the application to be tested is selected from the to-be-selected tester set. The target tester set is selected in a targeted manner, based on the set of tester models and the task description information, and can cover the testing requirements of the application publisher.
After the target tester set is obtained, a crowdsourcing test strategy can be generated according to the target tester set, and the application to be tested is issued to each tester in the target tester set according to the crowdsourcing test strategy. The crowdsourcing test strategy is a strategy of selecting a tester set that satisfies the test resource coverage constraint while maximizing crowdsourced test quality, and of having that set carry out the crowdsourced test.
Compared with the existing mechanism, this application selects the target tester set for testing the application to be tested from the to-be-selected tester set based on the task description information and the existing set of tester models, and then issues the application to each tester in the target tester set according to the crowdsourcing test strategy generated from that set. This improves the efficiency of selecting the testers in the target tester set and discovers potential testers with good test resources and good test capability, who can provide better application test reports to the application publisher for fixing bugs in the application.
In some possible designs, the task description information includes a tester expected value and a test index, where the tester expected value is the total number of testers expected to test the application to be tested. The test index includes at least one of a test quality and a coverage expected value of the test resource coverage.
The test quality refers to the application publisher's evaluation of the quality with which a tester tests the application; it may be measured by a quality score, identified by a quality grade, or measured in other ways, and the specific measure is not limited in the present application.
The test resource coverage measures how many of the test resources in the expected test resource set are covered by the test resources owned by the selected target tester set; it may be regarded as the ratio of the test resource set corresponding to the currently selected tester set to the expected test resource set.
Optionally, the task description information may further include indication information of a desired set of test resources, test resource requirement information, and test capability requirement information.
In some possible designs, the set of tester models includes a tester model of at least one tester. A tester model includes the tester's subset of testing resources, the testing quality of at least one application tested by the tester using at least one testing resource, and the tester's weighted testing quality over at least one tested application. For a given tester, the weighted testing quality is the weighted average of the testing qualities the tester obtained on each tested application, and it can be used to measure the tester's overall testing quality.
In some possible designs, the selecting a target tester set for testing the application to be tested from the candidate tester sets based on the set of tester models and the task description information includes:
selecting the target tester set from the tester set to be selected based on the tester model set, the tester expected value and the coverage expected value, the test resource subset of each tester in the tester set to be selected, the intersection between the test resource subsets of each tester in the tester set to be selected, and the constraint condition of the test resource coverage; the constraint condition comprises that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set. The first ratio is used to measure whether the currently selected tester set covers all the test resources in the expected set of test resources. When the first ratio reaches the expected coverage value given in the task description information, it indicates that the currently selected tester set covers all or most of the test resources in the expected test resource set, and the iteration may be ended.
In the present application, the target tester set may be selected by a scoring strategy and a pigeonhole strategy. In selection, the target tester set may be mined by the scoring strategy or the pigeonhole strategy alone, or by both strategies, which is not limited in this application. The two are described separately below:
First, selecting the target tester set based on a scoring strategy
The selecting the target tester set from the tester candidate set based on the tester model set, the tester expected value and the coverage expected value, the testing resource subset of each tester in the tester candidate set, the intersection between the testing resource subsets of each tester in the tester candidate set, and the constraint condition of the testing resource coverage, includes:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
and calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents weighted values of the coverage of the test resources and the test quality, and the target tester refers to the tester which enables at least one of the coverage of the test resources and the test quality in the first scoring formula to be the largest.
And after a target tester is obtained in each iteration, adding the target tester into a candidate tester set, and performing next iteration calculation.
When the first ratio tends to 1 or is greater than or equal to 1, the iterative computation ends, and the candidate tester set obtained at that point is taken as the target tester set. In this way, based on the scoring strategy, an effective target tester set is obtained while operating efficiency is preserved, and the multi-dimensional influence of each to-be-selected tester on the already-selected set can be analyzed more fully when the distribution strategy is mined.
Optionally, the first scoring formula is as follows (reconstructed from the definitions of its terms):

$$\mathrm{Score}(w)=\alpha\cdot\frac{\lvert R_w\cap C_0\rvert}{\max_{w'\in W_1}\lvert R_{w'}\cap C_0\rvert}+(1-\alpha)\cdot\frac{\sigma(\{w\},W_0)^{+}}{\max_{w'\in W_1}\sigma(\{w'\},W_0)^{+}}$$

where α is the weight of the test resource coverage, (1 - α) is the weight of the test quality, and α ∈ [0,1]; C0 is the set of test resources in the expected test resource set that remain to be covered; W0 is the candidate tester set; W1 is the set of testers to be selected, and w′ denotes a tester in W1; Rw is the test resource subset of tester w, so |Rw ∩ C0| is the test resource increment that w provides for C0; max over w′ ∈ W1 of |Rw′ ∩ C0| is the maximum test resource increment any tester in W1 provides for C0; σ({w}, W0)+ is the test quality increment provided by tester w to W0; σ({w′}, W0)+ is the test quality increment provided by tester w′ to W0; and max over w′ ∈ W1 of σ({w′}, W0)+ is the maximum test quality increment any tester in W1 provides for W0.
In this way, by introducing the first scoring formula, the benefit each candidate tester brings to the candidate tester set once selected can be measured along multiple dimensions, so testers that yield a better crowdsourced test effect can be identified. Scoring terms or formula variants may be added to the first scoring formula to extend it to additional scoring dimensions; the specific extensions and the chosen scoring dimensions are not limited in the present application.
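To make the iteration concrete, the following minimal Python sketch implements a score-based greedy selection under the formula as reconstructed above; the data layout (per-tester resource sets and weighted qualities) and the quality-increment stand-in are illustrative assumptions rather than the patent's prescribed implementation.

```python
from typing import Dict, Set

def quality_increment(tester: str, selected: Set[str],
                      weighted_quality: Dict[str, float]) -> float:
    # Assumed stand-in for sigma({w}, W0)+: the tester's weighted test
    # quality, truncated at zero, taken as the quality gain for W0.
    return max(weighted_quality.get(tester, 0.0), 0.0)

def select_by_score(resources: Dict[str, Set[str]],      # R_w per tester
                    weighted_quality: Dict[str, float],
                    expected: Set[str],                   # expected resource set
                    alpha: float,
                    coverage_goal: float = 1.0) -> Set[str]:
    candidates = set(resources)            # W1: testers still to be considered
    selected: Set[str] = set()             # W0: candidate (already chosen) set
    covered: Set[str] = set()
    if not expected:
        return selected
    while candidates and len(covered) / len(expected) < coverage_goal:
        c0 = expected - covered            # resources still to be covered
        max_res = max(len(resources[w] & c0) for w in candidates) or 1
        max_q = max(quality_increment(w, selected, weighted_quality)
                    for w in candidates) or 1.0

        def score(w: str) -> float:
            # First scoring formula: weighted resource + quality increments.
            return (alpha * len(resources[w] & c0) / max_res
                    + (1 - alpha)
                    * quality_increment(w, selected, weighted_quality) / max_q)

        best = max(candidates, key=score)  # tester maximizing the score
        selected.add(best)
        covered |= resources[best] & expected
        candidates.remove(best)
    return selected
```

The loop ends either when the first ratio (covered resources over the expected set) reaches the coverage goal or when the candidate pool is exhausted, matching the iteration-ending condition described above.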
Second, mining the target tester set based on a pigeonhole strategy miner
The selecting a target tester set for testing the application to be tested from the tester set to be selected based on the tester model set and the task description information comprises:
selecting the target tester set from the candidate tester set based on the set of tester models, the tester expected values and the coverage expected values, the test resource subset of each tester in the candidate tester set, and the pigeonhole policy; the constraint condition includes that the first ratio is close to or not less than the expected coverage value.
The first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
In some embodiments, the selecting the target set of testers from the set of testers to be selected based on the set of tester models, the expected value of testers and the expected value of coverage, a subset of test resources of each tester in the set of testers to be selected, and a pigeonhole policy comprises:
and selecting a target tester from the candidate tester set by using an iterative algorithm, wherein the target tester is the candidate tester which provides the maximum test quality increment for the candidate tester set in each iterative calculation.
And adding the target tester into a candidate tester set, and calculating a real-time testing resource set corresponding to the candidate tester set.
And when the intersection of the real-time test resource set and the test resource expectation set is the test resource expectation set, ending iterative computation, and taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
It can be seen that, based on the pigeonhole strategy, the tester providing the maximum quality increment can be selected in each round from the candidate testers that satisfy the pigeonhole principle. When the crowdsourcing test strategy is mined, the influence of each tester in the to-be-selected set on the test resource coverage of the already-selected candidate set can be analyzed more fully, and a tester set that satisfies the crowdsourced test task can finally be mined.
The target tester set obtained by either the scoring strategy or the pigeonhole strategy can satisfy the expectation; that is, the intersection of the test resource set corresponding to the target tester set and the expected test resource set is the expected test resource set.
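A corresponding minimal sketch of the pigeonhole-style selection, under the same illustrative data layout as the scoring sketch above: in each round only candidates that still contribute at least one uncovered resource qualify, and among them the tester with the largest assumed quality increment is chosen, until the expected resource set is covered.

```python
from typing import Dict, Set

def select_by_pigeonhole(resources: Dict[str, Set[str]],
                         weighted_quality: Dict[str, float],
                         expected: Set[str]) -> Set[str]:
    candidates = set(resources)
    selected: Set[str] = set()
    covered: Set[str] = set()
    while candidates and not expected <= covered:
        needed = expected - covered
        # Only candidates adding at least one still-needed resource qualify.
        useful = [w for w in candidates if resources[w] & needed]
        if not useful:
            break  # the expected set cannot be fully covered by the pool
        best = max(useful, key=lambda w: weighted_quality.get(w, 0.0))
        selected.add(best)
        covered |= resources[best] & expected
        candidates.remove(best)
    return selected
```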
In some possible designs, before the target tester set for testing the application to be tested is selected from the candidate tester set based on the set of tester models and the task description information, the testers in the to-be-selected tester set must first be modeled based on a historical data set. The specific process is as follows:
and acquiring a historical data set, wherein the historical data set comprises the testing resources of the testers in the to-be-selected tester set and the testing capability information of the testers.
Modeling testers in the tester collection to be selected based on the test resources and the test capability information to obtain the tester model collection.
Therefore, the time for selecting the target tester set can be effectively shortened based on the tester model, and the operation efficiency is improved.
Optionally, the historical data set further includes a test report set fed back by the tester, and the modeling of the tester in the candidate tester set based on the test resource of the tester in the candidate tester set and the test capability information of the tester in the candidate tester set is performed to obtain the set of tester models, including:
and determining the set of testers to be selected from the test report set, and then calculating a first test resource set and a first test quality set according to the set of testers to be selected.
The first testing resource set comprises a plurality of testing resource subsets, and each testing resource subset corresponds to one tester; the first testing quality set comprises testing quality of at least one application tested by each tester in the candidate tester set by utilizing at least one testing resource and weighted testing quality of at least one application tested by each tester in the candidate tester set.
Finally, the set of tester models is generated from the first set of test resources and the first set of test qualities. Because this modeling mechanism models each tester on the crowdsourced testing platform at both coarse and fine granularity, based on the tester's testing capability and testing resources, the resulting tester models have reference value for crowdsourced testing and can effectively and truly predict each tester's testing effect.
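As one way to picture the modeling step, the following sketch builds tester models from historical test reports; the report fields (tester, app, resource, score, weight) are assumptions made for illustration, since the text leaves the concrete storage format open.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class TesterModel:
    resources: Set[str] = field(default_factory=set)   # test resource subset
    # quality[(app, resource)]: fine-grained per-app, per-resource test quality
    quality: Dict[Tuple[str, str], float] = field(default_factory=dict)
    weighted_quality: float = 0.0                       # coarse-grained score

def build_models(reports: List[dict]) -> Dict[str, TesterModel]:
    # Each report is assumed to look like:
    # {"tester": ..., "app": ..., "resource": ..., "score": ..., "weight": ...}
    models: Dict[str, TesterModel] = {}
    sums: Dict[str, List[float]] = {}
    for r in reports:
        m = models.setdefault(r["tester"], TesterModel())
        m.resources.add(r["resource"])
        m.quality[(r["app"], r["resource"])] = r["score"]
        acc = sums.setdefault(r["tester"], [0.0, 0.0])
        w = r.get("weight", 1.0)
        acc[0] += r["score"] * w
        acc[1] += w
    for tester, (num, den) in sums.items():
        # Weighted test quality: weighted average over the tester's reports.
        models[tester].weighted_quality = num / den if den else 0.0
    return models
```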
Optionally, the number of the historical applications is at least one, and the application to be tested is the same as or different from the historical applications. When the application to be tested is the same as the historical application, the version updating of the historical application can be tested, and the historical application can also be retested; and when the application to be tested is different from the historical application, the application to be tested is tested aiming at the new application.
In some possible designs, after modeling the testers in the candidate tester set based on the test resources and the test capability information to obtain the tester model set, the tester model may be further updated to further improve the reference value and accuracy of the tester model, and the updating is mainly performed by one of:
and obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report.
Or obtaining a test report fed back by the new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester so as to update the set of tester models.
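Both update paths can be served by one small routine; the sketch below reuses the assumed TesterModel structure from the modeling sketch earlier and, as a simplification, recomputes the coarse-grained score as an unweighted mean of the recorded per-(application, resource) qualities.

```python
from typing import Dict

def update_models(models: Dict[str, "TesterModel"], report: dict) -> None:
    tester = report["tester"]
    if tester not in models:
        models[tester] = TesterModel()   # a new tester: create a fresh model
    m = models[tester]
    m.resources.add(report["resource"])
    m.quality[(report["app"], report["resource"])] = report["score"]
    scores = list(m.quality.values())
    m.weighted_quality = sum(scores) / len(scores)
```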
A second aspect of the present application provides an apparatus for processing crowdsourced test data, having functions to implement the method of processing crowdsourced test data provided by the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions; the modules may be software and/or hardware.
In one possible design, the means for processing crowdsourced test data comprises:
a transceiver module, configured to obtain task description information of an application to be tested, where the task description information indicates that a crowdsourced test is to be performed on the application to be tested;
a processing module, configured to select, based on the set of tester models and the task description information obtained by the transceiver module, a target tester set for testing the application to be tested from the candidate tester set;
and configured to generate a crowdsourcing test strategy according to the target tester set, the transceiver module issuing the application to be tested to each tester in the target tester set according to the crowdsourcing test strategy.
In some possible designs, the task description information includes a tester expected value and a test index, the tester expected value refers to a total number of testers for testing the application to be tested, and the test index includes at least one of a test quality and a coverage expected value of a test resource coverage;
the crowdsourcing test strategy being a strategy of selecting a tester set that satisfies the test resource coverage constraint while maximizing crowdsourced test quality, and of having that set carry out the crowdsourced test.
Optionally, the task description information further includes indication information of a desired set of test resources.
In some possible designs, the set of tester models includes at least one tester model of a tester, the tester model including a subset of testing resources of the tester, a testing quality of the tester testing at least one application using the at least one testing resource, and a weighted testing quality of the tester testing at least one application.
In some possible designs, the processing module is specifically configured to:
selecting the target tester set from the tester set to be selected based on the tester model set, the tester expected value and the coverage expected value, the test resource subset of each tester in the tester set to be selected, the intersection between the test resource subsets of each tester in the tester set to be selected, and the constraint condition of the test resource coverage; the constraint condition comprises that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
In some possible designs, the processing module is specifically configured to:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents a weighted value of the coverage of the test resources and the test quality, and the target tester is the tester which enables at least one of the coverage of the test resources and the test quality in the first scoring formula to be the largest;
after a target tester is obtained in each iteration, adding the target tester into a candidate tester set, and performing next iteration calculation;
and when the first ratio tends to 1 or is more than or equal to 1, ending the iterative computation, and taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
In some possible designs, the first scoring formula is as follows (reconstructed as above):

$$\mathrm{Score}(w)=\alpha\cdot\frac{\lvert R_w\cap C_0\rvert}{\max_{w'\in W_1}\lvert R_{w'}\cap C_0\rvert}+(1-\alpha)\cdot\frac{\sigma(\{w\},W_0)^{+}}{\max_{w'\in W_1}\sigma(\{w'\},W_0)^{+}}$$

where α is the weight of the test resource coverage, (1 - α) is the weight of the test quality, and α ∈ [0,1]; C0 is the set of test resources in the expected test resource set that remain to be covered; W0 is the candidate tester set; W1 is the set of testers to be selected, and w′ denotes a tester in W1; |Rw ∩ C0| is the test resource increment that tester w provides for C0; max over w′ ∈ W1 of |Rw′ ∩ C0| is the maximum test resource increment any tester in W1 provides for C0; σ({w}, W0)+ is the test quality increment provided by tester w to W0; and max over w′ ∈ W1 of σ({w′}, W0)+ is the maximum test quality increment any tester in W1 provides for W0.
In some possible designs, the processing module is specifically configured to:
selecting the target tester set from the candidate tester set based on the set of tester models, the tester expected values and the coverage expected values, the test resource subset of each tester in the candidate tester set, and the pigeonhole policy; the constraint condition comprises that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
In some possible designs, the processing module is specifically configured to:
selecting a target tester from the candidate tester set by using an iterative algorithm, wherein the target tester is a candidate tester which provides the maximum test quality increment for the candidate tester set in each iterative calculation;
adding the target tester into a candidate tester set, and calculating a real-time testing resource set corresponding to the candidate tester set;
and when the intersection of the real-time test resource set and the test resource expectation set is the test resource expectation set, ending iterative computation, and taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
In some possible designs, the intersection of the test resource set and the expected set of test resources corresponding to the target tester set is the expected set of test resources.
In some possible designs, the processing module, before selecting a target tester set for testing the application to be tested from the candidate tester sets based on the set of tester models and the task description information, is further configured to:
acquiring a historical data set, wherein the historical data set comprises test resources of testers in a to-be-selected tester set and test capability information of the testers;
modeling testers in the tester collection to be selected based on the test resources and the test capability information to obtain the tester model collection.
In some possible designs, the historical data set further includes a test report set fed back by the tester, and the processing module is specifically configured to:
determining the set of testers to be selected from the test report set;
calculating a first test resource set and a first test quality set according to the set of testers to be selected;
the first testing resource set comprises a plurality of testing resource subsets, and each testing resource subset corresponds to one tester; the first testing quality set comprises testing quality of at least one application tested by each tester in the to-be-selected tester set by utilizing at least one testing resource and weighted testing quality of at least one application tested by each tester in the to-be-selected tester set;
generating the set of tester models from the first set of test resources and the first set of test qualities.
Optionally, the number of the historical applications is at least one, and the application to be tested is the same as or different from the historical applications.
In some possible designs, after modeling the testers in the candidate tester set based on the test resources and the test capability information to obtain the tester model set, the processing module further performs at least one of the following steps:
obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report;
or obtaining a test report fed back by the new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester so as to update the set of tester models.
Yet another aspect of the present application provides an apparatus for processing crowdsourced test data, comprising at least one connected processor, memory and transceiver, wherein the memory is configured to store program code and the processor is configured to call the program code in the memory to perform the method of the above aspects.
Yet another aspect of the present application provides a computer storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of the above-described aspects.
Yet another aspect of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the above-described aspects.
Compared with the prior art, in the scheme provided by the application, after the task description information of the application to be tested is obtained, the target tester set for testing the application is selected from the candidate tester set based on the existing set of tester models and the task description information, and the application is then issued to each tester in the target tester set according to the crowdsourcing test strategy generated from that set, so that those testers perform the crowdsourced test on the application. This improves the efficiency of selecting the testers in the target tester set and discovers potential testers with good test resources and good test capability, who can provide better application test reports to the application publisher for fixing bugs in the application.
Drawings
FIG. 1 is a schematic diagram of a network topology of an apparatus for processing crowdsourced test data according to an embodiment of the invention;
FIG. 2 is a diagram illustrating the interaction of the system components in accordance with an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for processing crowdsourced test data according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating an embodiment of obtaining an expected set of test resources;
FIG. 5 is a schematic flow chart illustrating the process of creating a tester model according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart illustrating a process of obtaining a target tester set based on a scoring policy according to an embodiment of the present invention;
fig. 7 is a schematic flow chart illustrating the process of obtaining a target tester set based on the pigeonhole policy according to the embodiment of the present invention;
FIG. 8 is a schematic diagram of another network topology of an apparatus for processing crowdsourced test data according to an embodiment of the invention;
FIG. 9 is a block diagram of an apparatus for processing crowdsourced test data according to an embodiment of the invention;
FIG. 10 is a block diagram of an entity apparatus for performing a method of processing crowdsourced test data according to an embodiment of the invention;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments described herein can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprise," "include," and "have," and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to it. The division of modules presented herein is merely a logical division and may be implemented differently in practice: multiple modules may be combined or integrated into another system, some features may be omitted or not implemented, and the couplings shown or discussed (direct couplings or communicative connections) may run through interfaces; indirect couplings or communicative connections between modules may be electrical or take other similar forms. None of this limits the application. The modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purpose of the present disclosure.
The application provides a method and a device for processing crowdsourcing test data, which can be used for big data processing. The details will be described below. FIG. 1 is a schematic diagram of a network topology of an apparatus for processing crowdsourced test data, which may include a plurality of system components, each of which may run on one or more servers to collectively implement the functions of crowdsourced test platform software; these system components deployed on one or more servers may also be considered crowdsourced test platform software. The crowdsourced test platform software can provide test services for the testers participating in crowdsourced testing, collect the test reports they feed back, and analyze the collected reports. In fig. 1, the apparatus for processing crowdsourced test data mainly comprises the following system components: a task distribution interface, a policy computation server, a modeling server, a test result server, a distribution server, and a database.
In the policy computation server, a test task requirement extractor and a crowdsourcing policy miner can be deployed centrally.
The test task requirement extractor is used for: extracting requirements from the input task description information of the application to be tested, which arrives as a task stream, and producing an application test task requirement model instance in a uniform format. The task stream contains the task description information of the application to be tested, which can be embodied as a five-tuple <M, D, Rq, Rewards, Time>. M is the uploaded application to be tested; D is a user manual in natural language describing how to operate and use the application; Rq is a test requirement list, which may include test environment resource information such as network location, operating system, network type, and system version; Rewards is the task completion reward, comprising reward conditions and the corresponding set of specific reward amounts; and Time represents the crowdsourced execution time for which the task continues. Table 1 below shows an example of task description information for an application to be tested.
Table 1: an example of task description information for an application to be tested (the table content is rendered as an image in the original publication).
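For illustration only, the five-tuple <M, D, Rq, Rewards, Time> described above can be held in a structure such as the following sketch; the field names map to the tuple elements, and the concrete types are assumptions rather than anything fixed by the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TaskDescription:
    app_package: bytes         # M: the uploaded application to be tested
    user_manual: str           # D: natural-language operation and usage guide
    requirements: List[str]    # Rq: test requirements (location, OS, network...)
    rewards: Dict[str, float]  # Rewards: reward condition -> reward amount
    duration_hours: float      # Time: crowdsourced execution time of the task
```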
The crowdsourcing policy miner is used to: mine a set of potential testers from all available testers, according to the tester resource and capability model database and a mobile application test task requirement model instance. It fully considers the mutual influence among the potential testers and treats the set as a testing whole: the set's combined test resources are sufficient to meet the expected test resource coverage, and the expected test quality is ultimately maximized. The strategy of having this potential tester set conduct the crowdsourced test can be called a crowdsourcing test strategy. In the present application, crowdsourcing policy miners are mainly of two types: a score-based strategy miner and a pigeonhole-principle-based strategy miner.
In the modeling server, tester resources and capability modelers may be deployed centrally.
The database stores a historical data set, which includes the various test data fed back by testers who participated in crowdsourced test tasks, the testers' test resources, and so on. The historical data set comprises a set of crowdsourced test task quintuples, each consisting of the test task description (and its identifier) of a historical application, the set of test Reports submitted by testers, the application publisher's Scores for the test quality, the task start time, and the task end time; that is, a crowdsourced test task quintuple can be represented as <Task_Id, Task, Reports, Scores, start_time, end_time>. After the test reports fed back by testers are collected, the application publisher's quality Scores are assigned per test case and per test result involved in each report: Scores_c denotes the scores for testing the application with test resource c, covering all testers' scores on all test cases under resource c, and an overall score Scores_G is also given. The historical data set accumulates and grows as the crowdsourced testing platform develops; if the platform has just gone online, the historical data set is empty.
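Similarly, one historical crowdsourced test task quintuple <Task_Id, Task, Reports, Scores, start_time, end_time> can be pictured as the record below; the field names follow the text, while the concrete types, including the nesting of the per-resource Scores_c entries, are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CrowdTaskRecord:
    task_id: str
    task: "TaskDescription"           # test task description of a historical app
    reports: List[dict]               # test reports submitted by testers
    # scores[resource_c][(tester, test_case)] holds the Scores_c entries
    scores: Dict[str, Dict[tuple, float]]
    overall_score: float              # Scores_G
    start_time: str
    end_time: str
```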
Before a crowdsourced test is initiated, the policy computation server may mine an optimal crowdsourcing test strategy for the current crowdsourced test task according to the historical data set, and transmit the mined strategy to the distribution server.
The distribution server formulates targeted incentive rewards according to the crowdsourcing test strategy, and then notifies each tester in the potential tester set and distributes the task to them, attracting testers to participate in the crowdsourced test through increased rewards or service push. This ultimately improves the expected test quality and the expected test resource coverage.
In the present application, the relationship interaction diagram of each system component can refer to fig. 2, and mainly includes four parts, namely an input layer, a modeling layer, an algorithm layer, and a prediction result layer.
An input layer: the historical data set of crowdsourced test tasks is imported periodically into the tester resource and capability modeler of the modeling layer; the task description information of the application to be tested is input into the test task requirement extractor as a task stream, one crowdsourced test task at a time; and the total number k of testers to be mined, predefined by the system or set by the application publisher, is input into the crowdsourcing strategy miner.
A modeling layer: including a tester resource and capability modeler and a test task requirements extractor.
An algorithm layer: comprises the crowdsourcing strategy miner, which can call the built tester models from the tester resource and capability modeler, obtain the task description information of the application to be tested from the test task requirement extractor, and obtain the total number k of testers to be mined, predefined by the system or set by the application publisher.
A prediction result layer: used to select a tester set of k testers that satisfies the test environment coverage and maximizes the test quality, to predict the test environment coverage and the test quality, and to formulate the corresponding crowdsourcing test strategy and return it to the application publisher.
It should be noted that the tester referred to in this application may be a terminal device, a device providing voice and/or data connectivity to a user, a handheld device having a wireless connection function, or other processing device connected to a wireless modem. A wireless terminal, which may be a mobile terminal such as a mobile phone (or a "cellular" phone) and a computer having a mobile terminal, for example, a portable, pocket, hand-held, computer-included or vehicle-mounted mobile device, may communicate with one or more core networks via a Radio Access Network (RAN). Examples of such devices include Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDA). A wireless Terminal may also be referred to as a system, a Subscriber Unit (Subscriber Unit), a Subscriber Station (Subscriber Station), a Mobile Station (Mobile), a Remote Station (Remote Station), an Access Point (Access Point), a Remote Terminal (Remote Terminal), an Access Terminal (Access Terminal), a User Terminal (User Terminal), a Terminal Device, a User Agent (User Agent), a User Device (User Device), or a User Equipment (User Equipment).
In order to solve the technical problems, the application mainly provides the following technical scheme:
based on the historical data accumulated by the crowdsourced testing platform, the testing capability and resources of a tester are quantitatively measured by mixing a coarse-grained overall capability score with fine-grained testing capability scores for each test resource the tester holds. For a given application test task, the mutual influence among the testers jointly completing the task is analyzed, and an optimal crowdsourcing test strategy is mined using the proposed multi-dimensional scoring formula and pigeonhole strategy formula. The crowdsourcing test strategy takes the test environment coverage as a constraint condition while maximizing the test quality, which significantly raises the success rate of application test tasks in the crowdsourced test environment and guides the application publisher toward the best test result.
The historical data set may include historical data of tests on at least one historical application, each tester being likely to have tested at least one of the historical applications.
In crowdsourced testing, potential testers can be selected directly from the candidate testers and then be given a new application, or a new version of a historical application, to test; an already-tested historical application can also be retested to discover more bugs and improve the application.
Referring to fig. 2, a method for processing crowdsourced test data provided by the present application is illustrated below, where the method mainly includes:
201. Acquire task description information of the application to be tested.
The task description information of the application to be tested indicates that the application to be tested is to be tested. Optionally, in some embodiments, the task description information indicates a tester expected value and a test index, where the tester expected value is the total number of testers expected to test the application to be tested, and the test index includes at least one of a test quality, a test condition, and a coverage expected value of the test resource coverage.
The test quality refers to the evaluation, by the application publisher (which may also be regarded as the device initiating the crowdsourced test), of the quality with which a tester tests the application; it may be measured by a quality score, identified by a quality grade, or measured in other ways, and the specific measure is not limited in the present application.
The test resource coverage measures how many of the test resources in the expected test resource set are covered by the test resources owned by the selected target tester set; it may be regarded as the ratio of the test resource set corresponding to the currently selected tester set to the expected test resource set. For example, if a given expected test resource set contains 100 test resources, 70 testers are selected, and the 70 testers together own 89 of those test resources, then the test resource coverage of the selected tester set is 0.89.
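To make the ratio concrete, a one-function sketch of the coverage computation follows, assuming the set-based data layout used in the other sketches in this description and a non-empty expected set.

```python
from typing import Dict, Set

def resource_coverage(selected: Set[str],
                      resources: Dict[str, Set[str]],
                      expected: Set[str]) -> float:
    # Union of the resource subsets of all selected testers.
    owned = set().union(*(resources[w] for w in selected)) if selected else set()
    return len(owned & expected) / len(expected)

# E.g., 100 expected resources and 70 selected testers jointly owning 89
# of them gives resource_coverage(...) == 0.89.
```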
Optionally, in some embodiments, the task description information may further include indication information of a desired set of test resources, test resource requirement information, and test capability requirement information.
The expected test resource set is the set of test resources that the test of the application to be tested is expected to cover. Different testers may hold different, identical, or partially identical test resources, and in theory the test resources the application needs during testing should all be provided by the selected target tester set; that is, the selected target tester set should cover all the test resources in the expected test resource set. For example, for a current application A to be tested, the obtained task description information might specify an expected test resource set including: Beijing and Shanghai; China Telecom, China Mobile, and China Unicom; Android systems above version 5.0; systems above iOS 6; and so on. Testers meeting these test resource requirements are then found according to the task description information.
The method and the device do not limit the number of testers with the same test resource subset in the target tester set, and also do not limit the number of testers with the same or similar test quality in the target tester set.
The test resource requirement information specifies the test resource conditions that a tester participating in the crowdsourced test task must satisfy; for example, participating phones must run Android 5.1 or above and have a network address in Beijing or Shanghai.
The test capability requirement information specifies the testing capability required of testers participating in the current crowdsourced test task; for example, the hardware score of a participating phone must be above 10000.
The coverage expected value measures the extent to which the test resources owned by the target tester set to be selected should cover the test resources in the expected test resource set. Specifically, it can be regarded as the expected ratio of the test resource set corresponding to the currently selected tester set to the expected test resource set. The coverage expected value may be set to a value tending to 1 or to a value not less than 1. When the coverage expected value tends to 1, a tester set covering almost all or most of the test resources in the expected set is to be selected; when it equals 1, a tester set covering all the test resources in the expected set is to be selected; when it is greater than 1, a tester set is to be selected whose test resource set contains all the test resources in the expected set, with resources to spare. During target tester selection, it can be judged whether the current test resource coverage has reached the coverage expected value given in the task description information; if so, the search for the next target tester can end. Of course, the search may also continue, so that more testers suitable for crowdsourced testing of the application are selected.
202. Select a target tester set for testing the application to be tested from the to-be-selected tester set, based on the set of tester models and the task description information.
203. Generate a crowdsourcing test strategy according to the target tester set.
The crowdsourcing test strategy is a strategy of selecting a tester set that satisfies the test resource coverage constraint while maximizing crowdsourced test quality, and of having that set carry out the crowdsourced test. For example, if k testers are selected in step 202, a crowdsourcing test strategy can be generated from those k testers in step 203; since the k testers are selected in a targeted manner, the crowdsourcing test strategy achieves a targeted test effect.
204. Issue the application to be tested to each tester in the target tester set according to the crowdsourcing test strategy.
Compared with the prior art, in the scheme provided by the application, after the task description information of the application to be tested is obtained, the target tester set for testing the application is selected from the candidate tester set based on the existing set of tester models and the task description information, and the application is then issued to each tester in the target tester set according to the crowdsourcing test strategy generated from that set, so that those testers perform the crowdsourced test on the application. This improves, to a certain extent, the efficiency of selecting the testers in the target tester set and discovers potential testers with good test resources and good test capability, who can provide better application test reports to the application publisher for fixing bugs in the application.
The crowdsourced test in the present application may be a targeted test of applications of different versions at the same time, or an all-around test of the different versions of one application. During crowdsourced testing, application test tasks of the appropriate version can be pushed to the corresponding testers according to the partitioned test resource categories, and only testers holding the specific test resources need to be selected. For example, when crowdsourcing a test on Android phones, only testers holding Android system test resources are needed, and testers on other systems need not be considered. Crowdsourced tests can thus be carried out separately on testers belonging to the same test resource category, to discover the application's bugs under that test resource category.
Optionally, in some embodiments of the present invention, after the task description information is obtained, before the target tester set is selected, the test resource expectation set is further determined from the historical data set according to the task description information. Specifically, the test task requirement extractor may obtain the test task requirement, as shown in fig. 4, the specific process is as follows:
the inputs of the test task requirement extractor are: and ti is an application test task, wherein the ti is { t1, t2, t3, … and tm }, i belongs to {1, … and m }.
The output of the test task requirement extractor is: eti ═ Et1,Et2,Et3,…,Etm},ti∈{t1,…,tm}。
In the initialization state, initialization
Figure BDA0001295109750000131
Eti∈E。
The test task requirement extractor reads the task data stream from the buffer T of the application test task to be processed, circularly traverses each application test task ti, and matches the test resource category in the ti request by using a regular expression.
While cycling through each test resource category:
circularly extracting each expected test resource r contained in the test resource category to finally obtain the whole test resource expected set Eti,Eti=EtiAnd E, U { r }, and then the cyclic traversal operation is ended.
It should be noted that, for an application, the required test resources correspond to the test resources of the tester, and this application does not distinguish between them.
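As an illustration of the extraction loop above, here is a minimal Python sketch; the task text format and the regular expression (one "category: resource, resource, ..." line per requested category) are assumptions made for illustration only.

```python
import re
from typing import Dict, Set

CATEGORY_RE = re.compile(r"^(?P<category>[\w ]+):\s*(?P<resources>.+)$",
                         re.MULTILINE)

def extract_requirements(tasks: Dict[str, str]) -> Dict[str, Set[str]]:
    expected: Dict[str, Set[str]] = {tid: set() for tid in tasks}  # E_ti = {}
    for tid, text in tasks.items():                    # traverse each task ti
        for m in CATEGORY_RE.finditer(text):           # match resource categories
            for r in m.group("resources").split(","):  # each expected resource r
                expected[tid].add(r.strip())           # E_ti = E_ti U {r}
    return expected
```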
Optionally, in some embodiments of the present invention, before selecting a target tester set for testing the application to be tested from the candidate tester set based on the set of tester models and the task description information, a tester model needs to be established, which includes the following specific processes:
firstly, acquiring a historical data set, wherein the historical data set comprises the to-be-selected tester set, the test resources of the testers in that set, and the testers' test capability information, the testers in the to-be-selected set having participated in testing the historical applications.
Then, the testers in the set of testers to be selected are modeled based on the test resources and the test capability information to obtain the set of tester models.
Wherein the set of tester models may include a tester model of at least one tester, the tester model including a subset of testing resources of the tester, a testing quality of the at least one application tested by the tester using the at least one testing resource, and a weighted testing quality of the at least one application tested by the tester.
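As a concrete illustration, such a tester model can be represented as a small record; the following is a minimal sketch whose field names are illustrative, chosen to mirror the Rw, CScores and GScores notation used later in this description:

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class TesterModel:
    tester_id: str
    resources: Set[str] = field(default_factory=set)          # Rw: test resource subset
    cscores: Dict[str, float] = field(default_factory=dict)   # CScores: per-resource test quality
    gscore: float = 0.0                                        # GScores: weighted test quality

# Values taken from the worked example later in this description.
w1 = TesterModel("w1",
                 resources={"Beijing", "Android 4.1", "China Unicom", "Huawei phone"},
                 cscores={"Beijing": 1.0, "Android 4.1": 0.5,
                          "China Unicom": 0.5, "Huawei phone": 0.4},
                 gscore=0.4)
```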
After the testers in the candidate tester set are modeled based on the test resources and the test capability information to obtain the set of tester models, the set of tester models may be updated, mainly in the following cases:
A test report newly fed back by at least one tester in the set of tester models is obtained, and the tester model corresponding to that tester is updated according to the newly fed back test report.
Alternatively, a test report fed back by a new tester is obtained, and modeling is performed according to that test report to obtain a tester model corresponding to the new tester, so as to update the set of tester models.
By updating the tester model, the reference value and accuracy of the tester model can be improved.
Because the historical data set may further include the test report set fed back by each tester, establishing a tester model requires computing, through a loop traversal, the correspondence between each tester, the tester's test resources, and the tester's capability information; once this correspondence is determined, the tester modeling operation can be performed. In the present application, the set of tester models is determined mainly by the following steps:
a. Determine the set of testers to be selected from the test report set, where all testers in the set have participated in test tasks of historical applications. Optionally, the application to be tested and the historical application may be the same or different. The more crowdsourcing tests a tester has participated in, the more reference value the tester model established for that tester provides. When the application to be tested is the same as a historical application, the test may be a version-update test of the historical application or a retest of the historical application; when the application to be tested differs from the historical applications, the test targets a new application.
b. Then calculate a first test resource set and a first test quality set according to the set of testers to be selected.
The first testing resource set comprises a plurality of testing resource subsets, and each testing resource subset corresponds to one tester; the first testing quality set comprises testing quality of at least one application tested by each tester in the candidate tester set by utilizing at least one testing resource and weighted testing quality of at least one application tested by each tester in the candidate tester set. Wherein, for a tester, the weighted test quality refers to the weighted average of at least one test quality obtained by the tester testing each application.
The tester model in the application can be established in real time when the target tester set is selected, or can be established in advance based on a historical data set. The pre-established tester model can effectively shorten the time for selecting the target tester set, thereby improving the operation efficiency.
c. Generate the set of tester models according to the first test resource set and the first test quality set.
Therefore, this modeling mechanism models each tester in the crowdtesting platform at both coarse and fine granularity based on the tester's test capability and test resources, so that the resulting tester model has reference value for crowdsourcing tests and can effectively and truthfully predict the test effect of each tester.
The specific modeling process may be performed by a tester resource and capability modeler.
The input to the tester resource and capability modeler is the historical data set H of crowdsourcing test tasks, which comprises the test report sets of a plurality of tested historical applications; each historical application may be tested by a plurality of testers w, and each historical application test task corresponds to one test report set. A test report records the tester w who tested the historical application and the subset Rw of test resources owned by that tester.
The output of the tester resource and capability modeler is a tester model for each tester w, including the subset Rw of test resources owned by the tester, the test quality achieved by the tester for each application tested with each owned test resource (denoted CScores; e.g., CScores_c1 denotes the test quality achieved using test resource c1), and the weighted test quality of the tester over the owned subset of test resources (denoted GScores).
As shown in fig. 5, the specific process is as follows:
(1) Traverse, in a loop, the test report set corresponding to each historical application in H, obtain from it the set of testers who participated in the historical application test tasks, and form the complete tester set U of the crowdtesting platform; then end the loop traversal.
(2) Traverse each tester u in U in a loop; compute the test resource subset Ru of each tester u according to step (3), and compute the CScores and GScores of tester u according to step (4).
(3) Traverse, in a loop, the test report set corresponding to each historical application test task in H and take the tester information w of each test report; if w is completely consistent with the information of some tester u in U, w and u are determined to be the same tester, and the test report is matched to that tester. The test resource subset of tester u in step (2) is then updated as Ru = Ru ∪ {c | c ∈ the test resource set recorded in the matched test report}, and the loop traversal ends once every tester u in U has been matched with its test reports.
(4) Traverse, in a loop, the Scores calculated by the application package sender for each historical application test task in H and take the tester information w of each test report; if w is completely consistent with the information of some tester u in U, w and u are the same tester and the test report is matched. Then update CScores_c = func(Scores_c, CScores_c) and GScores = func(Scores_G, GScores), where Scores_c1 is the quality score obtained by the tester using test resource c1 to test the application, and func is any cumulative function, such as a linear summation function, an average function, or a maximum/minimum function; func can be set according to different crowdtesting scenarios and is not specifically limited in this application. Then end the loop traversal.
(5) Establish the tester model of tester u from u, Ru, CScores and GScores, and then output the tester model of each tester u; the tester model is the resource and capability model of the tester for testing applications.
Optionally, the step (3) and the step (4) may be combined into the same loop traversal.
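A compact sketch of this modeler is given below, reusing the TesterModel record sketched earlier. It is one admissible reading of steps (1) to (5): the report layout is an assumption, and func is instantiated as an average, one of the cumulative functions the description explicitly allows.

```python
from collections import defaultdict
from typing import Dict, List

def build_tester_models(history: List[dict]) -> Dict[str, TesterModel]:
    """history: one dict per test report, e.g.
    {"tester": "w1", "resources": {"Beijing"}, "scores": {"Beijing": 1.0}, "gscore": 0.4}
    """
    raw = defaultdict(lambda: {"resources": set(), "cscores": defaultdict(list), "gscores": []})
    for report in history:                         # steps (1)-(4): one pass over all reports
        entry = raw[report["tester"]]              # match the report to tester u
        entry["resources"] |= report["resources"]  # Ru = Ru ∪ {c | c in the report}
        for c, s in report["scores"].items():
            entry["cscores"][c].append(s)          # accumulate Scores_c per resource
        entry["gscores"].append(report["gscore"])  # accumulate Scores_G
    models = {}
    for u, entry in raw.items():                   # step (5): build the model of each tester u
        models[u] = TesterModel(
            tester_id=u,
            resources=entry["resources"],
            cscores={c: sum(v) / len(v) for c, v in entry["cscores"].items()},  # func = average
            gscore=sum(entry["gscores"]) / len(entry["gscores"]),
        )
    return models
```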
Optionally, in some embodiments of the present invention, the selecting, from the candidate tester set, a target tester set for testing the application to be tested based on the set of tester models and the task description information may include:
selecting the target tester set from the tester set to be selected based on the tester model set, the tester expected value and the coverage expected value, the test resource subset of each tester in the tester set to be selected, the intersection between the test resource subsets of each tester in the tester set to be selected, and the constraint condition of the test resource coverage; the constraint condition includes that the first ratio is close to or not less than the expected coverage value.
The first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set. The first ratio is used to measure whether the currently selected tester set covers all the test resources in the expected set of test resources. When the first ratio reaches the expected coverage value given in the task description information, it indicates that the currently selected tester set covers all or most of the test resources in the expected test resource set, and the iteration may be ended.
Specifically, a scoring strategy and a pigeonhole strategy can be adopted to select the target tester set; in this application, either strategy may be used on its own to mine the target tester set, or both may be used together, and this application imposes no specific limitation. The scoring strategy and the pigeonhole strategy given in the embodiments of this application can be realized by a crowdsourcing test strategy miner, which may comprise two system components with different characteristics, namely a scoring strategy miner and a pigeonhole strategy miner. They are described separately below:
First, selecting the target tester set based on the scoring strategy
Specifically, a tester w is first selected from the set of testers to be selected, and the test resource subset of tester w is obtained, where tester w refers to a tester in the set of testers to be selected.
Then, a first scoring formula is calculated using an iterative algorithm to obtain a target tester, where the first scoring formula represents a weighted value of test resource coverage and test quality. The target tester is the tester that maximizes at least one of the test resource coverage and the test quality in the first scoring formula; alternatively, a tester from the set whose score exceeds a certain threshold may be taken as the target tester.
And after a target tester is obtained in each iteration, adding the target tester into a candidate tester set, and performing next iteration calculation.
And when the first ratio tends to 1 or is more than or equal to 1, ending the iterative computation, and then taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
Optionally, in some embodiments, when the crowdsourcing test strategy is mined by the scoring strategy miner, the tester that maximizes the scoring formula result may be selected greedily in each of k iterations. When mining the crowdsourcing test strategy in this way, the influence of the candidate testers on the test resource coverage and/or test quality of the candidate tester set (namely, the currently selected tester set) can be analyzed more fully across multiple dimensions. A scoring formula is introduced that measures, from multiple dimensions, the increment (including the test resource coverage increment and/or the test quality increment) that each candidate tester would bring to the target tester set after being selected. Scoring items or formula variants may be added to the scoring formula so as to extend it to further scoring dimensions; the specific extensions and scoring dimensions are not limited in this application.
Second, selecting the target tester set based on the pigeonhole strategy
Specifically, the target tester set may be selected from the set of testers to be selected based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the set of testers to be selected, and a pigeonhole policy; the constraint condition includes that the first ratio is close to or not less than the expected coverage value.
The first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set because there may be testers with overlapped test resources in the actual tester set.
Optionally, in some embodiments, when the target tester set is selected based on the pigeonhole policy, an iterative algorithm may be used to select a target tester from the set of testers to be selected, where the target tester refers to the candidate tester that provides the largest test quality increment for the candidate tester set in each iterative computation.
The target tester is then added to the candidate tester set, and the real-time test resource set corresponding to the candidate tester set is calculated. When the intersection of the real-time test resource set and the test resource expectation set equals the test resource expectation set, the resources in the test resource expectation set are completely covered (or even exceeded), and the constraint can be considered satisfied: the first ratio is close to or not less than the expected coverage value. The iterative computation can therefore end, and the candidate tester set obtained at that point is taken as the target tester set.
After the target tester set is obtained based on the scoring strategy or the pigeonhole strategy, the intersection of the test resource set corresponding to the target tester set and the test resource expectation set is the test resource expectation set.
It can be seen that, based on the pigeonhole strategy, the tester with the maximum quality can be selected each time from the candidate testers satisfying the pigeonhole principle. When the crowdsourcing test strategy is mined, the influence of each candidate tester in the set of testers to be selected on the test resource coverage of the selected candidate tester set can be analyzed more fully, and a tester set satisfying the crowdsourcing test task can finally be mined.
Optionally, in some embodiments, when the crowdsourcing test strategy is mined by the pigeonhole strategy miner, the candidate tester that maximizes the test quality is selected greedily in each of k iterations from the candidate testers satisfying the pigeonhole principle, and is taken as the target tester. When mining crowdsourcing test strategies in this way, the influence of candidate testers on the candidate tester set in test environment coverage can be analyzed more fully. When candidate testers are screened based on the pigeonhole principle, only those satisfying the pigeonhole principle need to be considered.
The scoring strategy miner and the pigeonhole strategy miner suit different practical scenarios; they can run simultaneously or cooperate to complement each other, and can be dynamically and automatically selected for deployment as required. The scoring strategy miner can support more complex personalized targets beyond test environment coverage and test quality. The pigeonhole strategy miner can reach 100% test environment coverage under certain conditions and, on that basis, maximize the test quality. In practical applications the deployment is flexible, and this application imposes no specific limitation.
The scoring strategy miner and the pigeonhole strategy miner are illustrated separately below.
First, mining the target tester set based on the scoring strategy miner
Specifically, the scoring strategy miner provided in the embodiments of this application can gradually mine an optimal tester set greedily over k iterations. Here k is the total number of testers for testing the application to be tested given in the task description information, so k iterations are required; in each iteration one target tester (i.e., the current best tester) is selected from the current set of testers to be selected and added to the current candidate tester set Wo, and after k iterations Wo contains k best testers. The criterion for selecting a target tester is to maximize the test quality increment of the candidate tester set Wo while covering as many as possible of the test environments to be covered in Et.
To mine the target tester in each iteration, a scoring formula may be introduced to measure the score of each candidate tester, i.e., how much benefit the tester brings to the target tester set. Assume the test resource expectation set indicated in the task description is Et, and W is the set of testers in the historical data set. After a suitable set of testers to be selected, W1, is pre-screened from W, the test resource coverage and the test quality of the candidate testers can be measured through the scoring formula: a weight is set for the test resource coverage and for the test quality respectively, the weighted value of the two is calculated according to the scoring formula, and when the target tester is selected in each iteration, the candidate with the largest weighted value is chosen directly as the target tester.
Therefore, based on the scoring strategy, an effective target tester set is obtained while guaranteeing operating efficiency, so that when the crowdsourcing test strategy is mined, the influence of the testers to be selected on the selected set can be analyzed more fully across multiple dimensions.
The present application provides one form of the first scoring formula, as follows:

Score(w, Wo, C0) = α · |Rw ∩ C0| / max_{w′∈W1} |Rw′ ∩ C0| + (1 − α) · σ({w}, Wo) / max_{w′∈W1} σ({w′}, Wo)

where α is the weight of the test resource coverage, (1 − α) is the weight of the test quality, and α ∈ [0, 1]; C0 refers to the set of test resources still to be covered in the test resource expectation set Et; Wo refers to the candidate tester set; w′ refers to a candidate tester in the set of testers to be selected W1; Rw ∩ C0 is the intersection of the test resource subset of tester w with C0; max_{w′∈W1} |Rw′ ∩ C0| refers to the maximum test resource increment that a candidate tester w′ in W1 can provide for C0; σ({w}, Wo) refers to the test quality increment provided by tester w for Wo; σ({w′}, Wo) refers to the test quality increment provided by candidate tester w′ for Wo; and max_{w′∈W1} σ({w′}, Wo) refers to the maximum test quality increment that a candidate tester w′ in W1 can provide for Wo.

As can be seen, the first part of the first scoring formula measures the contribution of candidate tester w to the candidate tester set Wo in test coverage, the second part measures the contribution of tester w in test quality, and the denominator of each part is the maximum contribution of all candidate testers in W1 in the corresponding dimension. Introducing the first scoring formula makes it possible to measure, from multiple dimensions, the benefit each candidate tester would bring to the candidate tester set after being selected, and thus to identify the testers that bring a better effect to the crowdsourcing test. The first scoring formula can also be flexibly extended to further scoring dimensions.
The score of each candidate tester w can thus be calculated by the first scoring formula described above.
In the limiting cases: if max_{w′∈W1} |Rw′ ∩ C0| is 0, the coverage term is taken to be 1; if max_{w′∈W1} σ({w′}, Wo) is 0, the quality term is taken to be 1.
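A minimal sketch of this formula in code is given below, assuming (as in the worked example later in this description) that the quality increment σ({w}, Wo) reduces to the tester's GScores; the function and parameter names are illustrative:

```python
from typing import Dict, Set

def score(w: str, resources: Dict[str, Set[str]], gscores: Dict[str, float],
          c0: Set[str], w1: Set[str], alpha: float = 0.5) -> float:
    """First scoring formula: weighted coverage term plus quality term."""
    max_cov = max(len(resources[x] & c0) for x in w1)  # max_{w'∈W1} |Rw' ∩ C0|
    max_q = max(gscores[x] for x in w1)                # max_{w'∈W1} σ({w'}, Wo)
    cov_term = len(resources[w] & c0) / max_cov if max_cov else 1.0  # limiting case
    q_term = gscores[w] / max_q if max_q else 1.0                    # limiting case
    return alpha * cov_term + (1 - alpha) * q_term
```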
As shown in fig. 6, for the selection of k testers, the specific steps of calculating the crowdsourcing test strategy by the scoring strategy miner are as follows:
(1) First, initialize the candidate tester set Wo = ∅ and set C0 = Et.
(2) Execute k iterations; in each iteration, select the candidate tester with the best first scoring formula value, take that best candidate tester as the target tester, add it to the candidate tester set Wo, and delete from C0 the test resources that have been covered.
(3) After the k iterations, if |Wo| ≥ k, all desired test environments Ei in Et have been covered by the testers in the current candidate tester set Wo (that is, the intersection of the actual test resource set of the testers in Wo with Et is Et itself), and the current candidate tester set Wo is returned as the target tester set; if |Wo| < k, the desired test environments Ei in Et have not all been covered by the testers in Wo, and the current candidate tester set Wo together with the test resource coverage it can achieve is returned.
To improve the possibility that the scoring strategy miner finds, in W, a Wo that completely covers Et, additional scoring items can be added to the first scoring formula for expansion, so that besides test resource coverage and test quality, targets required by other crowdsourcing test tasks can also be supported.
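Putting steps (1) to (3) together, a compact sketch of the scoring strategy miner might look as follows (greedy selection over k iterations, reusing the score() helper above; the data layout is an assumption):

```python
def scoring_strategy_miner(resources, gscores, et, k, alpha=0.5):
    wo, c0 = set(), set(et)             # (1) Wo = ∅, C0 = Et
    w1 = set(resources)                 # current set of testers to be selected
    for _ in range(k):                  # (2) k greedy iterations
        if not w1:
            break
        best = max(w1, key=lambda x: score(x, resources, gscores, c0, w1, alpha))
        wo.add(best)                    # add the target tester to Wo
        c0 -= resources[best]           # delete the test resources already covered
        w1.remove(best)
    coverage = 1 - len(c0) / len(et)    # (3) test resource coverage achieved by Wo
    return wo, coverage
```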
Second, mining the target tester set based on the pigeonhole strategy miner
Specifically, the pigeonhole strategy miner in the embodiments of this application is based on the pigeonhole principle, which is as follows:
If the finally output target tester set Wo, containing k testers, can completely cover a given test resource expectation set Et, then at least one tester in Wo covers no fewer than ⌈|Et|/k⌉ of the test resources, where ⌈·⌉ is the ceiling function; that is, the number of testers in Wo whose coverage reaches at least ⌈|Et|/k⌉ test resources is at least one.
Based on the above principle, as shown in fig. 7, for the selection of k testers, the specific steps of the pigeonhole strategy miner in the embodiments of the present application are as follows:
(1) First initialize the candidate tester set Wo = ∅ and set C0 = Et; Wo, as the candidate tester set, is updated in the subsequent iterative computations. When a tester is selected as a target tester, at least one test resource in the test resource subset of that target tester may be considered a desired test resource in the test resource expectation set.
(2) Execute k iterations; in each iteration, select a target tester that satisfies the pigeonhole principle and can bring the maximum increment to the test quality of the current candidate tester set Wo. If no tester satisfying the pigeonhole principle exists, return the current Wo and the test coverage that Wo can reach relative to Et; if testers satisfying the pigeonhole principle exist, perform step (3).
For example, after the 3rd iteration, Wo already contains 3 target testers. When the next target tester is selected in the 4th iteration, based on the Wo currently containing 3 target testers, the tester a that satisfies the pigeonhole principle and can bring the largest test quality increment to that Wo is selected; tester a is the target tester of the 4th iteration and is added to the candidate tester set Wo, so that Wo is updated to a set containing 4 target testers. In each iteration one target tester is selected and Wo is updated as the candidate tester set, and so on; the details are not repeated here.
(3) After each of the k iterations, make the following judgment: whether all desired test environments in Et are covered by the testers in the current candidate tester set Wo. If |Wo| ≥ k, all desired test resources in Et are covered by the testers in Wo, and the algorithm returns the finally obtained Wo as the target tester set; if |Wo| < k, the desired test resources in Et have not all been covered by the testers in Wo, and the current Wo together with the test coverage that Wo can reach relative to Et is returned.
When k ≥ |Et|, where |Et| is the total number of test resources in the test resource expectation set Et, the pigeonhole strategy miner can always mine a target tester set Wo of k testers satisfying the constraint target that Wo as a whole completely covers the test resource expectation set Et, provided such a set exists in W.
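A compact sketch of the pigeonhole strategy miner is given below. It assumes, as inferred from the worked example that follows, that in each iteration a candidate satisfies the pigeonhole condition if it covers at least ⌈|C0|/(k − |Wo|)⌉ of the still-uncovered resources; this per-iteration bound is one reading of the principle above, not a formula stated verbatim in the source:

```python
import math

def pigeonhole_strategy_miner(resources, gscores, et, k):
    wo, c0 = set(), set(et)                        # (1) Wo = ∅, C0 = Et
    w1 = set(resources)
    for _ in range(k):                             # (2) k iterations
        need = math.ceil(len(c0) / (k - len(wo)))  # pigeonhole bound ⌈|C0| / (k - |Wo|)⌉
        qualified = [x for x in w1 if len(resources[x] & c0) >= need]
        if not qualified:                          # no tester satisfies the principle
            break
        best = max(qualified, key=lambda x: gscores[x])  # maximum test quality increment
        wo.add(best)
        c0 -= resources[best]
        w1.remove(best)
    return wo, 1 - len(c0) / len(et)               # (3) Wo and the coverage it achieves
```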
For ease of understanding, crowdtesting platform A is used as an example below. Fig. 8 is a schematic diagram of the server topology of crowdtesting platform A, which includes a background processing server, a web service server, and a database; crowdtesting platform A may use a client/server (C/S) structure to provide crowdtesting services. The application package sender and the testers install the application to be tested; the application package sender launches a crowdsourcing test task for the application and scores each test report after receiving the test reports fed back by the testers. The web service server is the general interface server of the platform, in which a number of web page (web) services are deployed to serve the testers and application package senders; the processing procedures of the policy calculation server and the package issuing server are combined inside the web service server. The background processing server combines the functions of the test result server and the modeling server, and comprises the tester resource and capability modeler and the test task requirement extractor.
The mining process of the crowdsourcing test strategy is described below, taking a historical data set in crowdtesting platform A as an example. The historical data set contains 836 historical test tasks, each corresponding to one historical application, with 2385 testers participating in the tests.
First, the tester resource and capability modeler models the test resources and test capability of each tester to obtain a set of 2385 tester models. For illustration, 4 testers are selected, i.e., the set of testers to be selected W = {w1, w2, w3, w4}.
The tester resource and capability modeler analyzes the historical test tasks and historical test results; the test device is a mobile phone, and the test resources include the tester's location, operating system, carrier, and phone brand. After analysis, the output is as follows:
Rw1beijing, Android 4.1, china unicom, hua is a mobile phone }, w1. csccores ═ 1,0.5,0.5,0.4}
Rw21, 2. csccores ═ 0.3,1,0.5,0.4,0.3}
Rw3(chengdu, Android 4.3, china unicom, associative mobile phone }, w3. csccores ═ 0.2,0.4,1,0.4}
Rw41,0.4,0.5,1}, 4. csccores ═ 1,0.4, 1}
Wherein R isw1Represents the set of test resources for tester w1, w1.CScores represents the test quality score for tester w1, and so on.
Assume the overall score is calculated as the minimum of the test resource scores. Then:
w1.GScores = 0.4, w2.GScores = 0.3, w3.GScores = 0.2, w4.GScores = 0.4, where w1.GScores represents the overall test quality score of tester w1, and similarly for the others.
The test task requirement extractor extracts the task description information of crowdsourcing test task A from the stream of crowdsourcing test tasks to be issued. From this task description information, the total number k of testers expected to test the application of crowdsourcing test task A can be obtained (taking k = 2 as an example), and the test resource expectation set Et of crowdsourcing test task A can be computed, where Et = {Beijing, Chengdu, Android 4.4, Android 4.3, China Mobile}. Assuming the score of a tester set is the sum of the GScores of all its testers, the mining of a target tester set containing 2 target testers is described below, first with the scoring strategy miner and then with the pigeonhole strategy miner.
First, the scoring strategy miner is used to mine the target tester set.
Since k = 2, only 2 iterations are needed to pick the 2 testers. Assume the test resource coverage weight α is 0.5 and scoring is done with the first scoring formula above. In the initial state, the candidate tester set Wo = ∅, C0 = Et, and the current set of testers to be selected W = {w1, w2, w3, w4}. The results of the first iteration are as follows:
For tester w1: Score(w1, Wo, C0) = 0.5·(1/3) + (1 − 0.5)·(0.4/0.4) ≈ 0.67
For tester w2: Score(w2, Wo, C0) = 0.5·(2/3) + (1 − 0.5)·(0.3/0.4) ≈ 0.708
For tester w3: Score(w3, Wo, C0) = 0.5·(2/3) + (1 − 0.5)·(0.2/0.4) ≈ 0.583
For tester w4: Score(w4, Wo, C0) = 0.5·(3/3) + (1 − 0.5)·(0.4/0.4) = 1
where Wo represents the candidate tester set and C0 represents the set of test resources to be covered in the test resource expectation set. Score(w1, Wo, C0) represents the score of tester w1, combining the test quality that the test resources and tests of w1 bring to the candidate tester set Wo with the test resource coverage of the test resource subset of w1 relative to C0; likewise for the others.
It can be seen that among the testers (w1, w2, w3, w4), the highest-scoring tester is w4: in the first iteration it brings the largest test quality increment to Wo and the largest test resource coverage increment to C0. Therefore, tester w4 is taken as the target tester w.
In this iteration, the current set of test resources to be covered becomes C0 = C0 \ Rw = {Chengdu, Android 4.3}; Wo = Wo ∪ {w} = {w4}; W = {w1, w2, w3}. Since |Wo| < k, the second iteration is entered, with the following results:
For tester w1: Score(w1, Wo, C0) = 0.5·(0/2) + (1 − 0.5)·(0.4/0.4) = 0.5
For tester w2: Score(w2, Wo, C0) = 0.5·(0/2) + (1 − 0.5)·(0.3/0.4) = 0.375
For tester w3: Score(w3, Wo, C0) = 0.5·(2/2) + (1 − 0.5)·(0.2/0.4) = 0.75
As in the analysis of the first iteration, in the second iteration the highest-scoring tester among the candidates (w1, w2, w3) is w3. The current set of test resources to be covered becomes C0 = C0 \ Rw = {}; Wo = Wo ∪ {w} = {w4, w3}; and the current set of testers to be selected W = {w1, w2}.
After the second iteration, the total number of testers in Wo satisfies |Wo| = k, and Et ⊆ ∪_{w∈Wo} Rw, so (Wo, 1) is returned: the test resource coverage reached by Wo is 1, and the test quality reached by Wo is w4.GScores + w3.GScores = 0.8. The iteration loop therefore exits after the second iteration.
Thus, the output crowdsourcing test strategy is: tester w3 and tester w4, i.e., the target tester set is {w4, w3}; the test environment coverage achievable by the combination {w4, w3} is 100%, and the test quality is 0.8.
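These first-iteration numbers can be reproduced directly, mirroring the score() sketch above; the per-tester coverage counts |Rw ∩ C0| = 1, 2, 2, 3 are taken from the fractions stated in the example, so the resource sets themselves need not be spelled out:

```python
alpha = 0.5
cover = {"w1": 1, "w2": 2, "w3": 2, "w4": 3}            # |Rw ∩ C0| in the first iteration
gscores = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.4}  # GScores from the models above
max_cov, max_q = max(cover.values()), max(gscores.values())
for w in cover:
    s = alpha * cover[w] / max_cov + (1 - alpha) * gscores[w] / max_q
    print(w, round(s, 3))  # w1 0.667, w2 0.708, w3 0.583, w4 1.0
```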
Second, the pigeonhole strategy miner is used to mine the target tester set.
Since k = 2, only 2 iterations are needed to pick the 2 testers. Assume the test resource coverage weight α is 0.5. In the initial state, the candidate tester set Wo = ∅, C0 = Et, and the current set of testers to be selected W = {w1, w2, w3, w4}.
After the first iteration is finished, the result of the first iteration is as follows:
Since ⌈|C0|/(k − |Wo|)⌉ = ⌈5/2⌉ = 3, only a tester covering at least 3 of the test resources in C0 satisfies the pigeonhole condition, and the only such tester is w4; the first iteration therefore selects w4 as the target tester.
Thus, the target tester obtained in the first iteration is w = w4; C0 = C0 \ Rw = {Chengdu, Android 4.3}; Wo = Wo ∪ {w} = {w4}; W = {w1, w2, w3}.
Since |Wo| < k, the second iteration is entered:
Since ⌈|C0|/(k − |Wo|)⌉ = ⌈2/1⌉ = 2, only a tester covering both remaining test resources in C0 satisfies the pigeonhole condition, and the only such tester is w3; the second iteration therefore selects w3 as the target tester.
Thus w = w3; C0 = C0 \ Rw = {}; Wo = Wo ∪ {w} = {w4, w3}; W = {w1, w2}.
At this point |Wo| = k, and the iteration loop exits. Since Et ⊆ ∪_{w∈Wo} Rw, the finally output Wo covers all the test resources in Et, so (Wo, 1) is returned: the test resource coverage reached by Wo is 1, and the achievable test quality is w4.GScores + w3.GScores = 0.8.
Thus, the output crowdsourcing test strategy is again: tester w3 and tester w4, i.e., the target tester set is {w4, w3}; the test environment coverage achievable by the combination {w4, w3} is 100%, and the test quality is 0.8.
The features of the test resource expectation set, the test quality, the test resource coverage, the test capability information, the tester model, the tester set, the tester resource subset, the crowdsourcing test strategy, the scoring strategy, and the pigeonhole strategy described above also apply to the embodiments corresponding to fig. 9 and fig. 10 of the present application; similar parts are not repeated below.
A method for processing crowdsourced test data in the present application is described above; a device for performing that method is described below, and the device can perform the technical solution shown in any one of fig. 2 to 8. Referring to fig. 9, a device 90 for processing crowdsourcing test data is described. The device 90 may be a server on the service side, an application deployed on such a server (which may also be called a client), or a terminal device on the service side; the application scenario of the device is not limited in this application. The device 90 comprises:
the receiving and sending module 901 is configured to obtain task description information of an application to be tested, where the task description information is used to indicate that crowdsourcing testing is performed on the application to be tested.
A processing module 902, configured to select a target tester set for testing the application to be tested from a set of testers to be selected based on a set of tester models and the task description information obtained by the transceiver module 901; and to generate a crowdsourcing test strategy according to the target tester set and issue the application to be tested to each tester in the target tester set through the transceiver module 901 according to the crowdsourcing test strategy.
In the embodiment of the present invention, the processing module 902 selects a target tester set for testing the application to be tested from the candidate tester set based on the task description information and the existing set of tester models, and then issues the application to be tested to each tester in the target tester set according to the crowdsourcing test strategy generated from the target tester set. This improves the efficiency of selecting testers in the target tester set and discovers potential testers with good test resources and good test capability, who can provide better application test reports for application package senders so that the application package senders can repair bugs.
Optionally, in some embodiments of the present invention, the task description information includes a tester expected value and a test index; the tester expected value refers to the total number of testers for testing the application to be tested, and the test index includes at least one of a test quality and a coverage expected value of the test resource coverage.
The crowdsourcing test strategy is a strategy for selecting a tester set which can meet the constraint condition of testing resource coverage and meet the requirement of maximizing crowdsourcing test quality and performing crowdsourcing test.
Optionally, the task description information further includes indication information of a desired set of test resources.
Optionally, in some embodiments of the present invention, the set of tester models includes a tester model of at least one tester, the tester model including a subset of test resources of the tester, a test quality of the tester testing at least one application using at least one test resource, and a weighted test quality of the tester testing the at least one application.
In this embodiment of the application, the processing module 902 may select the target tester set based on a scoring policy and a pigeonhole policy; the scoring policy or the pigeonhole policy may be adopted separately to mine the target tester set, or both policies may be adopted, which is not limited in this application. They are described separately below:
optionally, in some embodiments of the present invention, when the target tester set is obtained based on the scoring policy, the processing module 902 is specifically configured to:
selecting the target tester set from the tester set to be selected based on the tester model set, the tester expected value and the coverage expected value, the test resource subset of each tester in the tester set to be selected, the intersection between the test resource subsets of each tester in the tester set to be selected, and the constraint condition of the test resource coverage; the constraint condition comprises that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
Optionally, in some embodiments of the present invention, the processing module 902 is specifically configured to:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents a weighted value of the coverage of the test resources and the test quality, and the target tester is the tester which enables at least one of the coverage of the test resources and the test quality in the first scoring formula to be the largest;
after a target tester is obtained in each iteration, adding the target tester into a candidate tester set, and performing next iteration calculation;
and when the first ratio tends to 1 or is greater than or equal to 1, end the iterative computation and take the candidate tester set obtained after the iterative computation ends as the target tester set. It can be seen that, based on the scoring policy, the processing module 902 obtains an effective target tester set while guaranteeing operating efficiency, so that the influence of the testers to be selected on the selected set can be analyzed more fully across multiple dimensions when the crowdsourcing test strategy is mined.
Optionally, in some embodiments of the present invention, the first scoring formula is as follows:

Score(w, Wo, C0) = α · |Rw ∩ C0| / max_{w′∈W1} |Rw′ ∩ C0| + (1 − α) · σ({w}, Wo) / max_{w′∈W1} σ({w′}, Wo)

where α is the weight of the test resource coverage, (1 − α) is the weight of the test quality, and α ∈ [0, 1]; C0 refers to the set of test resources to be covered in the test resource expectation set; Wo refers to the candidate tester set; w′ refers to a candidate tester in the set of testers to be selected W1; Rw ∩ C0 is the intersection of the test resource subset of tester w with C0; max_{w′∈W1} |Rw′ ∩ C0| refers to the maximum test resource increment that a candidate tester w′ in W1 can provide for C0; σ({w}, Wo) refers to the test quality increment provided by tester w for Wo; σ({w′}, Wo) refers to the test quality increment provided by candidate tester w′ for Wo; and max_{w′∈W1} σ({w′}, Wo) refers to the maximum test quality increment that a candidate tester w′ in W1 can provide for Wo. After the first scoring formula is introduced, the benefit each candidate tester brings to the candidate tester set after being selected can be measured from multiple dimensions, thereby identifying the testers that bring a better effect to the crowdsourcing test. Scoring items or formula variants may be added to the first scoring formula so as to extend it to further scoring dimensions; the specific extensions and scoring dimensions are not limited in this application.
Optionally, in some embodiments of the present invention, when obtaining the target tester set based on the pigeonhole policy, the processing module 902 is specifically configured to:
select the target tester set from the set of testers to be selected based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the set of testers to be selected, and the pigeonhole policy; the constraint condition includes that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
Optionally, in some embodiments of the present invention, the processing module 902 is specifically configured to:
selecting a target tester from the candidate tester set by using an iterative algorithm, wherein the target tester is a candidate tester which provides the maximum test quality increment for the candidate tester set in each iterative calculation;
adding the target tester into a candidate tester set, and calculating a real-time testing resource set corresponding to the candidate tester set;
and when the intersection of the real-time test resource set and the test resource expectation set is the test resource expectation set, end the iterative computation and take the candidate tester set obtained after the iterative computation ends as the target tester set. It can be seen that, based on the pigeonhole policy, the tester with the maximum quality can be selected each time from the candidate testers satisfying the pigeonhole principle. When the crowdsourcing test strategy is mined, the influence of each candidate tester in the set of testers to be selected on the test resource coverage of the selected candidate tester set can be analyzed more fully, and a tester set satisfying the crowdsourcing test task can finally be mined.
The processing module 902 obtains, based on the above scoring policy or pigeonhole policy, a target tester set that can satisfy the expectation, that is, the intersection of the test resource set corresponding to the target tester set and the test resource expectation set is the test resource expectation set.
Optionally, in some embodiments of the present invention, before the selecting of a target tester set for testing the application to be tested from the candidate tester set based on the set of tester models and the task description information, the processing module 902 is further configured to:
acquiring a historical data set, wherein the historical data set comprises test resources of testers in a to-be-selected tester set and test capability information of the testers;
modeling testers in the tester collection to be selected based on the test resources and the test capability information to obtain the tester model collection. Therefore, the time for selecting the target tester set can be effectively shortened based on the tester model, and the operation efficiency is improved.
Optionally, in some embodiments of the present invention, the historical data set further includes a test report set fed back by the tester, and the processing module 902 is specifically configured to:
determining the set of testers to be selected from the test report set;
calculating a first test resource set and a first test quality set according to the set of testers to be selected;
the first testing resource set comprises a plurality of testing resource subsets, and each testing resource subset corresponds to one tester; the first testing quality set comprises testing quality of at least one application tested by each tester in the candidate tester set by utilizing at least one testing resource and weighted testing quality of at least one application tested by each tester in the candidate tester set.
The set of tester models is then generated from the first set of test resources and the first set of test qualities.
Therefore, this modeling mechanism models each tester in the crowdtesting platform at both coarse and fine granularity based on the tester's test capability and test resources, so that the resulting tester model has a higher reference value for crowdsourcing tests and can effectively and truthfully predict the test effect of each tester.
Optionally, in some embodiments of the present invention, after modeling the testers in the candidate tester set based on the test resources and the test capability information to obtain the set of tester models, the processing module 902 further performs at least one of the following steps:
obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report;
or obtaining a test report fed back by the new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester so as to update the set of tester models.
It should be noted that, in the embodiments of the present application (including the embodiment shown in fig. 9), the entity device corresponding to the transceiver module may be a transceiver, and the entity device corresponding to each processing module may be a processor. Each device shown in fig. 9 may have the structure shown in fig. 10; when a device has the structure shown in fig. 10, the processor and the transceiver in fig. 10 implement functions the same as or similar to those of the processing module and transceiver module provided in the corresponding device embodiment, and the memory in fig. 10 stores the program code that the processor needs to call when executing the above method of data processing.
Fig. 11 is a schematic diagram of a server 1100 according to an embodiment of the present invention, which may include one or more Central Processing Units (CPUs) 1122 (e.g., one or more processors) and a memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) for storing applications 1142 or data 1144. Memory 1132 and storage media 1130 may be, among other things, transient storage or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 1122 may be provided in communication with the storage medium 1130 to execute a series of instruction operations in the storage medium 1130 on the server 1100.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input-output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and so forth.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 11. Specifically, the cpu 1122 performs the following operations by calling program instructions:
task description information of an application to be tested is obtained through an input/output interface 1158, and the task description information is used for indicating crowdsourcing test of the application to be tested.
Selecting a target tester set for testing the application to be tested from the set of testers to be selected based on the set of tester models and the task description information obtained through the input/output interface 1158;
and generating a crowdsourcing test strategy according to the target tester set, and issuing the application to be tested to each tester in the target tester set through the input/output interface 1158 according to the crowdsourcing test strategy.
Optionally, the task description information includes a tester expected value and a test index, where the tester expected value is a total number of testers testing the application to be tested, and the test index includes at least one of a test quality and a coverage expected value of a test resource coverage.
The crowdsourcing test strategy is a strategy for selecting a tester set which can meet the constraint condition of testing resource coverage and meet the requirement of maximizing crowdsourcing test quality and performing crowdsourcing test.
The task description information may also include indication information of a desired set of test resources.
Optionally, the set of tester models includes at least one tester model of a tester, and the tester model includes a subset of testing resources of the tester, a testing quality of the tester testing at least one application using the at least one testing resource, and a weighted testing quality of the tester testing at least one application.
Optionally, in some embodiments of the present invention, the central processing unit 1122 performs the following operations by calling a program instruction:
selecting the target tester set from the tester set to be selected based on the tester model set, the tester expected value and the coverage expected value, the test resource subset of each tester in the tester set to be selected, the intersection between the test resource subsets of each tester in the tester set to be selected, and the constraint condition of the test resource coverage; the constraint condition includes that the first ratio is close to or not less than the expected coverage value.
The first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
Optionally, in some embodiments of the present invention, the central processor 1122 specifically executes the following operations by calling a program instruction:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents a weighted value of the coverage of the test resources and the test quality, and the target tester is the tester which enables at least one of the coverage of the test resources and the test quality in the first scoring formula to be the largest;
after a target tester is obtained in each iteration, adding the target tester into a candidate tester set, and performing next iteration calculation;
and when the first ratio tends to 1 or is more than or equal to 1, ending the iterative computation, and taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
Optionally, in some embodiments of the present invention, the central processor 1122 specifically executes the following operations by calling a program instruction:
selecting the target tester set from the set of testers to be selected based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the set of testers to be selected, and the pigeonhole policy; the constraint condition includes that the first ratio is close to or not less than the expected coverage value;
the first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
Optionally, in some embodiments of the present invention, the central processor 1122 specifically executes the following operations by calling a program instruction:
selecting a target tester from the candidate tester set by using an iterative algorithm, wherein the target tester is a candidate tester which provides the maximum test quality increment for the candidate tester set in each iterative calculation;
adding the target tester into a candidate tester set, and calculating a real-time testing resource set corresponding to the candidate tester set;
and when the intersection of the real-time test resource set and the test resource expectation set is the test resource expectation set, ending iterative computation, and taking a candidate tester set obtained after the iterative computation is ended as the target tester set.
Optionally, in some embodiments of the present invention, before the selecting a target tester set for testing the application to be tested from the candidate tester set based on the set of tester models and the task description information, the central processor 1122 further executes the following operations by calling a program instruction:
and acquiring a historical data set through the input/output interface 1158, wherein the historical data set comprises the test resources of the testers in the to-be-selected tester set and the test capability information of the testers.
Modeling testers in the tester collection to be selected based on the test resources and the test capability information to obtain the tester model collection.
Optionally, in some embodiments of the present invention, the historical data set further includes a test report set fed back by the tester, and the central processor 1122 specifically performs the following operations:
determining the set of testers to be selected from the test report set;
calculating a first test resource set and a first test quality set according to the set of testers to be selected;
the first testing resource set comprises a plurality of testing resource subsets, and each testing resource subset corresponds to one tester; the first testing quality set comprises testing quality of at least one application tested by each tester in the to-be-selected tester set by utilizing at least one testing resource and weighted testing quality of at least one application tested by each tester in the to-be-selected tester set;
generating the set of tester models from the first set of test resources and the first set of test qualities.
Optionally, in some embodiments of the present invention, the historical data set further includes a test report set fed back by the tester, and after the central processor 1122 models the tester in the candidate tester set based on the test resource and the test capability information to obtain the set of tester models, at least one of the following operations is further performed:
obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report;
or obtaining a test report fed back by the new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester so as to update the set of tester models.
The present application also provides a computer storage medium storing a program which, when executed, performs some or all of the steps of the method for processing crowdsourcing test data described above.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are intended only to help understand the method and its core ideas. Meanwhile, a person skilled in the art may, following the ideas of the present application, make changes to the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (22)

1. A method of processing crowdsourced test data, the method comprising:
acquiring task description information of an application to be tested, wherein the task description information is used for indicating crowdsourcing test on the application to be tested;
selecting a target tester set for testing the application to be tested from a to-be-selected tester set based on a set of tester models and the task description information, wherein the set of tester models comprises at least one tester model of a tester; the tester model comprises a test resource subset of the tester, a test quality of at least one application tested by the tester using at least one test resource, and a weighted test quality of at least one application tested by the tester; the task description information comprises a tester expected value and a test index, wherein the tester expected value refers to the total number of testers for testing the application to be tested, the test index comprises the test quality and a coverage expected value of the test resource coverage, and the test quality refers to an evaluation, by an application subcontractor, of the tester's test quality; the task description information further comprises indication information of a test resource expectation set; and the test resource coverage refers to a ratio of the test resources owned by the target tester set to the test resource expectation set;
and generating a crowdsourcing test strategy according to the target tester set, and issuing the application to be tested to each tester in the target tester set according to the crowdsourcing test strategy, wherein the crowdsourcing test strategy is a strategy of selecting a tester set that satisfies the constraint condition of test resource coverage while maximizing the crowdsourcing test quality, and carrying out the crowdsourcing test.
2. The method of claim 1, wherein selecting a target tester set for testing the application to be tested from the to-be-selected tester set based on the set of tester models and the task description information comprises:
selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, the intersections between the test resource subsets of the testers in the to-be-selected tester set, and a constraint condition on the test resource coverage, wherein the constraint condition comprises that a first ratio is close to, or not less than, the coverage expected value;
the first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
3. The method of claim 2, wherein selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, the intersections between the test resource subsets of the testers, and the constraint condition on the test resource coverage comprises:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents a weighted combination of the test resource coverage and the test quality, and the target tester is the tester that maximizes at least one of the test resource coverage and the test quality in the first scoring formula;
after a target tester is obtained in each iteration, adding the target tester to the candidate tester set and performing the next iteration;
and when the first ratio tends to 1 or is greater than or equal to 1, ending the iterative computation and taking the candidate tester set obtained after the iterative computation ends as the target tester set.
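For orientation, the loop of this claim can be sketched as a greedy procedure in which the first ratio serves as the stopping test; `resources`, `quality_gain`, and the other names below are illustrative assumptions rather than elements of the claim.

```python
# Hedged sketch of the claim-3 iteration: greedily pick the tester that
# maximizes a weighted score of coverage gain and quality gain, and stop
# once the first ratio (covered / expected resources) reaches 1.
def greedy_select(pool, expected, resources, quality_gain, alpha=0.5):
    selected, covered = [], set()
    pool = list(pool)
    while pool and len(covered & expected) < len(expected):
        remaining = expected - covered
        max_cov = max(len(resources[t] & remaining) for t in pool) or 1
        max_q = max(quality_gain(t, selected) for t in pool) or 1
        best = max(pool, key=lambda t:
                   alpha * len(resources[t] & remaining) / max_cov
                   + (1 - alpha) * quality_gain(t, selected) / max_q)
        pool.remove(best)
        selected.append(best)
        covered |= resources[best]
    return selected
```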
4. The method of claim 3, wherein the first scoring formula is as follows:
$$\mathrm{score}(w)=\alpha\cdot\frac{\left|R_{w}\cap C_{0}\right|}{\max_{w'\in W_{1}}\left|R_{w'}\cap C_{0}\right|}+(1-\alpha)\cdot\frac{\Delta Q(w,W_{0})}{\max_{w'\in W_{1}}\Delta Q(w',W_{0})}$$

where α is the weight of the test resource coverage, (1−α) is the weight of the test quality, and α ∈ [0,1]; C0 refers to the set of test resources in the test resource expectation set that remain to be covered; W0 refers to the candidate tester set; W1 refers to the to-be-selected tester set, and w′ refers to a tester in W1; R_w denotes the test resource subset of tester w, so that |R_w ∩ C0| is the test resource increment provided by tester w for C0, and max_{w′∈W1} |R_{w′} ∩ C0| is the maximum test resource increment provided by any tester in W1 for C0; ΔQ(w, W0) is the test quality increment provided by tester w for W0, and max_{w′∈W1} ΔQ(w′, W0) is the maximum test quality increment provided by any tester in W1 for W0.
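As a quick numeric reading of the formula, with invented numbers: take α = 0.5, suppose tester w covers 2 of the resources remaining in C0 while the best tester in W1 covers 4, and suppose ΔQ(w, W0) = 0.6 while the largest quality increment available in W1 is 0.9. Then

$$\mathrm{score}(w)=0.5\cdot\frac{2}{4}+0.5\cdot\frac{0.6}{0.9}=0.25+0.33\ldots\approx 0.58,$$

so the coverage term and the quality term contribute on the same normalized scale regardless of their raw magnitudes.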
5. The method of claim 1, wherein selecting a target tester set for testing the application to be tested from the to-be-selected tester set based on the set of tester models and the task description information comprises:
selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, and a pigeonhole strategy, wherein the constraint condition comprises that the first ratio is close to, or not less than, the coverage expected value;
the first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
6. The method of claim 5, wherein selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, and the pigeonhole strategy comprises:
selecting a target tester from the to-be-selected tester set by using an iterative algorithm, wherein the target tester is the tester that provides the maximum test quality increment for the candidate tester set in each iteration;
adding the target tester to the candidate tester set, and calculating a real-time test resource set corresponding to the candidate tester set;
and when the intersection of the real-time test resource set and the test resource expectation set equals the test resource expectation set, ending the iterative computation and taking the candidate tester set obtained after the iterative computation ends as the target tester set.
7. The method of claim 5 or 6, wherein the intersection of the test resource set corresponding to the target tester set and the test resource expectation set is the test resource expectation set.
8. The method of claim 1, wherein before the selecting of a target tester set for testing the application to be tested from the to-be-selected tester set based on the set of tester models and the task description information, the method further comprises:
acquiring a historical data set, wherein the historical data set comprises test resources of testers in a to-be-selected tester set and test capability information of the testers;
modeling the testers in the to-be-selected tester set based on the test resources and the test capability information to obtain the set of tester models.
9. The method of claim 8, wherein the historical data set further comprises a test report set fed back by the testers, and the modeling of the testers in the to-be-selected tester set based on the test resources and the test capability information to obtain the set of tester models comprises:
determining the set of testers to be selected from the test report set;
calculating a first test resource set and a first test quality set according to the set of testers to be selected;
wherein the first test resource set comprises a plurality of test resource subsets, each test resource subset corresponding to one tester; and the first test quality set comprises, for each tester in the to-be-selected tester set, the test quality of at least one application tested using at least one test resource and the weighted test quality of at least one application tested by that tester;
generating the set of tester models from the first set of test resources and the first set of test qualities.
10. The method according to claim 8 or 9, wherein the number of the historical applications is at least one, and the application to be tested is the same as or different from the historical applications.
11. The method of claim 8 or 9, wherein after the testers in the to-be-selected tester set are modeled based on the test resources and the test capability information to obtain the set of tester models, the method further comprises at least one of the following:
obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report;
or obtaining a test report fed back by a new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester, so as to update the set of tester models.
12. An apparatus for processing crowdsourced test data, the apparatus comprising:
a transceiver module, configured to obtain task description information of an application to be tested, wherein the task description information is used for indicating a crowdsourcing test on the application to be tested;
a processing module, configured to select a target tester set for testing the application to be tested from a to-be-selected tester set based on a set of tester models and the task description information obtained by the transceiver module, wherein the set of tester models comprises at least one tester model of a tester; the tester model comprises a test resource subset of the tester, a test quality of at least one application tested by the tester using at least one test resource, and a weighted test quality of at least one application tested by the tester; the task description information comprises a tester expected value and a test index, wherein the tester expected value refers to the total number of testers for testing the application to be tested, the test index comprises at least one of the test quality and a coverage expected value of the test resource coverage, and the test quality refers to an evaluation, by an application subcontractor, of the tester's test quality; the task description information further comprises indication information of a test resource expectation set; and the test resource coverage refers to a ratio of the test resources owned by the target tester set to the test resource expectation set;
and configured to generate a crowdsourcing test strategy according to the target tester set and issue, through the transceiver module, the application to be tested to each tester in the target tester set according to the crowdsourcing test strategy, wherein the crowdsourcing test strategy is a strategy of selecting a tester set that satisfies the constraint condition of test resource coverage while maximizing the crowdsourcing test quality, and carrying out the crowdsourcing test.
13. The apparatus of claim 12, wherein the processing module is specifically configured to:
selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, the intersections between the test resource subsets of the testers in the to-be-selected tester set, and a constraint condition on the test resource coverage, wherein the constraint condition comprises that a first ratio is close to, or not less than, the coverage expected value;
the first ratio is a ratio of an actual test resource set of a currently selected tester set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
14. The apparatus of claim 13, wherein the processing module is specifically configured to:
selecting a tester w from the set of testers to be selected, and acquiring a test resource subset of the tester w, wherein the tester w is a tester in the set of testers to be selected;
calculating a first scoring formula by using an iterative algorithm to obtain a target tester, wherein the first scoring formula represents a weighted combination of the test resource coverage and the test quality, and the target tester is the tester that maximizes at least one of the test resource coverage and the test quality in the first scoring formula;
after a target tester is obtained in each iteration, adding the target tester to the candidate tester set and performing the next iteration;
and when the first ratio tends to 1 or is greater than or equal to 1, ending the iterative computation and taking the candidate tester set obtained after the iterative computation ends as the target tester set.
15. The apparatus of claim 14, wherein the first scoring formula is as follows:
$$\mathrm{score}(w)=\alpha\cdot\frac{\left|R_{w}\cap C_{0}\right|}{\max_{w'\in W_{1}}\left|R_{w'}\cap C_{0}\right|}+(1-\alpha)\cdot\frac{\Delta Q(w,W_{0})}{\max_{w'\in W_{1}}\Delta Q(w',W_{0})}$$

where α is the weight of the test resource coverage, (1−α) is the weight of the test quality, and α ∈ [0,1]; C0 refers to the set of test resources in the test resource expectation set that remain to be covered; W0 refers to the candidate tester set; W1 refers to the to-be-selected tester set, and w′ refers to a tester in W1; R_w denotes the test resource subset of tester w, so that |R_w ∩ C0| is the test resource increment provided by tester w for C0, and max_{w′∈W1} |R_{w′} ∩ C0| is the maximum test resource increment provided by any tester in W1 for C0; ΔQ(w, W0) is the test quality increment provided by tester w for W0, and max_{w′∈W1} ΔQ(w′, W0) is the maximum test quality increment provided by any tester in W1 for W0.
16. The apparatus of claim 12, wherein the processing module is specifically configured to:
selecting the target tester set from the to-be-selected tester set based on the set of tester models, the tester expected value and the coverage expected value, the test resource subset of each tester in the to-be-selected tester set, and a pigeonhole strategy, wherein the constraint condition comprises that the first ratio is close to, or not less than, the coverage expected value;
the first ratio is a ratio of a currently selected actual test resource set to the test resource expectation set, and the actual test resource set is a union of test resource subsets of each tester in the target tester set.
17. The apparatus of claim 16, wherein the processing module is specifically configured to:
selecting a target tester from the to-be-selected tester set by using an iterative algorithm, wherein the target tester is the tester that provides the maximum test quality increment for the candidate tester set in each iteration;
adding the target tester to the candidate tester set, and calculating a real-time test resource set corresponding to the candidate tester set;
and when the intersection of the real-time test resource set and the test resource expectation set equals the test resource expectation set, ending the iterative computation and taking the candidate tester set obtained after the iterative computation ends as the target tester set.
18. The apparatus of claim 12, wherein before the selecting of a target tester set for testing the application to be tested from the to-be-selected tester set based on the set of tester models and the task description information, the processing module is further configured to:
acquiring a historical data set, wherein the historical data set comprises test resources of testers in a to-be-selected tester set and test capability information of the testers;
modeling the testers in the to-be-selected tester set based on the test resources and the test capability information to obtain the set of tester models.
19. The apparatus of claim 18, wherein the historical data set further comprises a set of test reports fed back by the tester, and wherein the processing module is specifically configured to:
determining the set of testers to be selected from the test report set;
calculating a first test resource set and a first test quality set according to the set of testers to be selected;
wherein the first test resource set comprises a plurality of test resource subsets, each test resource subset corresponding to one tester; and the first test quality set comprises, for each tester in the to-be-selected tester set, the test quality of at least one application tested using at least one test resource and the weighted test quality of at least one application tested by that tester;
generating the set of tester models from the first set of test resources and the first set of test qualities.
20. The apparatus of claim 18 or 19, wherein after the testers in the to-be-selected tester set are modeled based on the test resources and the test capability information to obtain the set of tester models, the processing module further performs at least one of the following steps:
obtaining a test report newly fed back by at least one tester in the set of tester models, and updating the tester model corresponding to the tester according to the newly fed back test report;
or obtaining a test report fed back by a new tester, and modeling according to the test report fed back by the new tester to obtain a tester model corresponding to the new tester, so as to update the set of tester models.
21. An apparatus for processing crowdsourced test data, the apparatus comprising at least one connected processor, memory, transmitter and receiver, wherein the memory is configured to store program code and the processor is configured to invoke the program code in the memory to perform the method of any one of claims 1-11.
22. A computer storage medium characterized in that it comprises instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-11.
CN201710340474.4A 2017-05-15 2017-05-15 Method and device for processing crowdsourcing test data Active CN108874655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710340474.4A CN108874655B (en) 2017-05-15 2017-05-15 Method and device for processing crowdsourcing test data

Publications (2)

Publication Number Publication Date
CN108874655A (en) 2018-11-23
CN108874655B (en) 2021-12-24

Family

ID=64320169

Country Status (1)

Country Link
CN (1) CN108874655B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109324978B (en) * 2018-11-28 2022-05-24 北京精密机电控制设备研究所 Software test management system with multi-user cooperation
CN111291376B (en) * 2018-12-08 2023-05-05 深圳慕智科技有限公司 Web vulnerability verification method based on crowdsourcing and machine learning
CN109918308A (en) * 2019-03-13 2019-06-21 网易(杭州)网络有限公司 Test method and server based on crowdsourcing, storage medium
CN110096569A (en) * 2019-04-09 2019-08-06 中国科学院软件研究所 A kind of crowd survey personnel set recommended method
CN110222940B (en) * 2019-05-13 2023-06-23 西安工业大学 Crowdsourcing test platform tester recommendation algorithm
CN112398705B (en) * 2019-08-16 2022-07-22 中国移动通信有限公司研究院 Network quality evaluation method, device, equipment and storage medium
CN110708279B (en) * 2019-08-19 2021-08-13 中国电子科技网络信息安全有限公司 Vulnerability mining model construction method based on group intelligence
CN111770002B (en) * 2020-06-12 2022-02-25 南京领行科技股份有限公司 Test data forwarding control method and device, readable storage medium and electronic equipment
CN111966585B (en) * 2020-08-04 2024-02-02 建信金融科技有限责任公司 Execution method, device, equipment and system of test task
CN112988567B (en) * 2021-01-26 2022-02-15 广州番禺职业技术学院 Crowdsourcing test automated evaluation method and device
CN112817870A (en) * 2021-02-26 2021-05-18 北京小米移动软件有限公司 Software testing method, device and medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708600A (en) * 2016-12-12 2017-05-24 大连理工大学 Multi-agent modeling and expert system-based device for generating optimal release policy of crowd-sourcing platform

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8195498B2 (en) * 2009-05-18 2012-06-05 Microsoft Corporation Modeling a plurality of contests at a crowdsourcing node
US8626545B2 (en) * 2011-10-17 2014-01-07 CrowdFlower, Inc. Predicting future performance of multiple workers on crowdsourcing tasks and selecting repeated crowdsourcing workers
US20140304833A1 (en) * 2013-04-04 2014-10-09 Xerox Corporation Method and system for providing access to crowdsourcing tasks
US9383976B1 (en) * 2015-01-15 2016-07-05 Xerox Corporation Methods and systems for crowdsourcing software development project
CN104579854B (en) * 2015-02-12 2018-01-09 北京航空航天大学 Mass-rent method of testing
CN106294118A (en) * 2015-06-12 2017-01-04 富士通株式会社 Messaging device and information processing method
CN106371840A (en) * 2016-08-30 2017-02-01 北京航空航天大学 Software development method and device based on crowdsourcing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant