CN106959925B - Version testing method and device - Google Patents

Version testing method and device

Info

Publication number
CN106959925B
Authority
CN
China
Prior art keywords
version
test
parallel
users
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710279071.3A
Other languages
Chinese (zh)
Other versions
CN106959925A (en)
Inventor
蒋晓海
刘麒赟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Testin Information Technology Co Ltd
Original Assignee
Beijing Testin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Testin Information Technology Co Ltd filed Critical Beijing Testin Information Technology Co Ltd
Priority to CN201710279071.3A priority Critical patent/CN106959925B/en
Publication of CN106959925A publication Critical patent/CN106959925A/en
Application granted granted Critical
Publication of CN106959925B publication Critical patent/CN106959925B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/368Test management for test version control, e.g. updating test cases to a new software version

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a version testing method and device. The method comprises the following steps: distributing corresponding parallel versions to a plurality of test groups, each containing the same number of test users; determining a candidate version according to the test condition of each parallel version; adding new test users to the test group corresponding to the candidate version, then re-determining a candidate version from the latest test conditions and again adding new test users to its test group, repeating until a loop-termination condition is met; and determining the parallel version corresponding to the test group with the largest number of test users as the optimal version. Compared with the existing approach of determining the optimal version from a single round of data collection and analysis, this method fully accounts for the uncertainty of user preference, so the optimal version it determines is generally more accurate, which solves the problems in the prior art.

Description

Version testing method and device
Technical Field
The present application relates to the field of product testing technologies, and in particular, to a version testing method and apparatus.
Background
In a rapidly developing network era, user acceptance of products such as applications (APPs) and web pages often determines their success or failure, and the testing phase has an important influence on that acceptance.
At present, during version testing a plurality of parallel versions is often designed for a product and delivered to corresponding test users; the usage data generated by those test users is then collected and analyzed, and the parallel version with the highest user acceptance is determined from the analysis results.
However, because user preferences are uncertain, such prior-art techniques often struggle to accurately determine the parallel version that users accept most.
Disclosure of Invention
The embodiment of the application provides a version test method and device, which are used for solving the problems in the prior art.
The embodiment of the application provides a version test method, which comprises the following steps:
distributing corresponding parallel versions to a plurality of test groups, each containing the same number of test users;
determining a version to be selected according to the test condition of each parallel version;
adding new test users to the test group corresponding to the candidate version, then re-determining a candidate version according to the test condition of each parallel version and adding new test users to the test group of the newly determined candidate version, until a loop-termination condition is met;
and determining the parallel version corresponding to the test group with the largest number of test users as the optimal version.
Preferably, determining the version to be selected according to the test condition of each parallel version specifically includes:
collecting the use data of the test users in each test group to the corresponding parallel version;
analyzing the usage data by hypothesis testing;
and determining the version to be selected according to the analysis result.
Preferably, the analyzing of the usage data by hypothesis testing specifically includes:
analyzing the usage data through hypothesis testing, and determining the lift rate of each parallel version relative to the other parallel versions;
determining the version to be selected according to the analysis result, specifically:
and determining a parallel version whose lift rate relative to the other parallel versions is positive as the candidate version.
Preferably, the analyzing of the usage data by hypothesis testing specifically includes:
analyzing the usage data through hypothesis testing, and determining the statistical power of each parallel version;
determining the version to be selected according to the analysis result, specifically:
and determining a parallel version whose statistical power meets statistical significance as the candidate version.
Preferably, the analyzing of the usage data by hypothesis testing specifically includes:
analyzing the use data through hypothesis testing, and respectively determining the upper limit and the lower limit of a confidence interval of each parallel version;
determining the version to be selected according to the analysis result, specifically:
and determining the parallel version with the upper limit and the lower limit of the confidence interval both being positive values as the version to be selected.
Preferably, the analyzing of the usage data by hypothesis testing specifically includes:
analyzing the use data through hypothesis testing, and respectively determining the P value of each parallel version;
determining the version to be selected according to the analysis result, specifically:
and determining the candidate version by comparing each P value with a significance level.
Preferably, the cycle termination condition is specifically any one of the following:
all test users are distributed to each test group;
one of the test groups contains a number of test users whose proportion of all test users exceeds a preset threshold.
Preferably, before distributing the corresponding parallel versions to a plurality of test groups with the same number of test users, the method further includes:
determining a test index;
and creating a plurality of parallel versions according to the test indexes.
Preferably, adding a new test user to the test group corresponding to the version to be selected specifically includes:
and adding a preset number or a preset proportion of new test users to the test group corresponding to the candidate version.
The embodiment of the present application further provides a version testing apparatus, and the apparatus includes: an allocation unit, a first determination unit, an addition unit, and a second determination unit, wherein:
the distribution unit is used for respectively distributing corresponding parallel versions to a plurality of test groups with the same number of test users;
the first determining unit is used for determining the version to be selected according to the test condition of each parallel version;
the adding unit is used for adding new test users to the test group corresponding to the candidate version, re-determining a candidate version according to the test condition of each parallel version, and adding new test users to the test group of the newly determined candidate version, until a loop-termination condition is met;
and the second determining unit is used for determining the parallel version corresponding to the test group with the largest number of test users as the optimal version.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
the version testing method provided by the embodiment of the application allocates the corresponding parallel versions to the plurality of testing groups respectively, wherein the testing users of each testing group are same in number, the version to be selected is determined according to the testing condition of each parallel version, new testing users are added to the version to be selected, then the new version to be selected is determined again, and the testing users are added to the new version to be selected, so that circulation is performed, and the parallel version corresponding to the testing group with the largest number of testing users is determined as the optimal version until the circulation termination condition is met. Compared with the prior art that the optimal version is determined by acquiring and analyzing the use data at one time, uncertainty of user preference is fully considered, the determined optimal version can be generally accepted by the user more, and the problems in the prior art are solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart illustrating a specific implementation of a version testing method according to an embodiment of the present application;
fig. 2 is a specific example of a version testing method provided in an embodiment of the present application in practical application;
fig. 3 is a schematic structural diagram of a version testing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The main idea of the present invention is to divide all test users into two parts (called the first part and the second part). The test users in the first part are divided evenly into a plurality of test groups, each with the same number of test users; the number of test groups may equal the number of parallel versions, and each test group corresponds to one parallel version. According to the usage behavior of the test users in each group, the best-performing parallel version is determined as the candidate version, and some of the test users in the second part are allocated to the candidate version's test group as a reward. A new candidate version is then repeatedly determined according to the performance of each parallel version and rewarded in the same way. When a termination condition is met, the number of test users in each test group is counted, and the parallel version corresponding to the test group with the largest number of test users is taken as the optimal version. By repeatedly determining and rewarding a candidate version over multiple cycles, this method can generally identify a parallel version with higher user acceptance than the prior art.
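The determine-and-reward loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: every name is an assumption, and the `pick_candidate` callback stands in for the hypothesis-testing analysis described later (step S14).

```python
import random

def version_test(versions, all_users, initial_frac=0.05, reward_frac=0.05,
                 pick_candidate=None, majority_threshold=0.5):
    """Iteratively reward the best-performing version with more test users.

    `pick_candidate(groups)` must return the index of the candidate
    version given the current test groups (hypothesis-testing step).
    """
    random.shuffle(all_users)
    n = len(all_users)
    k = len(versions)
    group_size = int(n * initial_frac)
    # First part: k equal test groups; second part: unassigned pool.
    groups = [all_users[i * group_size:(i + 1) * group_size] for i in range(k)]
    pool = all_users[k * group_size:]

    while pool:
        best = pick_candidate(groups)           # step S14: analyze usage data
        reward = min(int(n * reward_frac), len(pool))
        groups[best].extend(pool[:reward])      # step S15: reward the candidate
        pool = pool[reward:]
        if max(len(g) for g in groups) > majority_threshold * n:
            break                               # one group holds a majority

    sizes = [len(g) for g in groups]
    return versions[sizes.index(max(sizes))]    # step S16: optimal version
```

With three versions and a callback that always prefers the second one, the loop keeps feeding users to group 2 until it crosses the majority threshold and version "B" is returned.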
Example 1
Embodiment 1 provides a version testing method, which can be used to solve the problem that it is difficult to accurately determine a parallel version with the highest user acceptance in the prior art. The specific flow diagram of the method is shown in fig. 1, and the method comprises the following steps:
step S11: and determining a test index.
Step S12: and creating a plurality of parallel versions according to the test indexes.
Steps S11 and S12 are explained together here.
The test index usually reflects the improvement point of the product to be released. For example, if the color of the product needs to be improved relative to the original version, color can be used as the test index and several parallel versions of the product can be designed, each with a different color. If the page layout of the original version needs to be improved, page layout can be used as the test index and several parallel versions can be designed, each with a different page layout. Of course, color and page layout can also be improved at the same time; both are then used as test indexes, and the corresponding parallel versions differ in color and/or page layout.
If other aspects of the original version need to be improved in practical application, corresponding test indexes can be determined according to the improvement points; generally, a new version may have various improvement points compared with the original version, and these improvement points may be used as test indexes to be tested, so as to determine multiple parallel versions respectively.
Step S13: and respectively distributing corresponding parallel versions to a plurality of test groups with the same number of test users.
The plurality of test groups means at least two test groups, each containing the same number of test users. It should be noted that "the same number" may be exactly the same, for example 50 test users in every group, or substantially the same, meaning the difference between groups falls within an error range. Generally, when the groups are large, a small difference is negligible: if test group A has 1000 test users and test group B has 1005, their sizes may be regarded as substantially the same.
In addition, "the same number of test users in each test group" may also mean that each test group holds the same proportion of all test users, for example 5% (or another value); that is, the number of test users in each test group is 5% of the total.
In practical application, a corresponding number of test groups may be determined according to the number of parallel versions, with the same number of test users in each group. It should be noted that at this point the sum of the test users in all test groups is usually smaller than the total number of test users; that is, all test users are divided into two parts, one of which is evenly allocated to the test groups while the other is not yet assigned to any test group.
After the corresponding parallel versions are respectively allocated to the test groups, the test users of different test groups use different parallel versions, and the test users of the same test group use the same parallel version.
For example, for the test index (color), three different parallel versions A, B and C are created; parallel version A is assigned to test group 1, parallel version B to test group 2, and parallel version C to test group 3. Each of test group 1, test group 2 and test group 3 holds 5% of all test users, while the remaining 85% of test users may not participate in the test for the time being and continue to use the original version.
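The initial 5%/5%/5% split of this example might be sketched with a hypothetical helper like the following; the patent does not prescribe any particular data structure, so the names and shapes here are assumptions.

```python
def split_groups(users, n_groups, frac):
    """Split off n_groups equal groups, each of size frac * len(users);
    the remaining users stay on the original version for now."""
    size = int(len(users) * frac)
    groups = [users[i * size:(i + 1) * size] for i in range(n_groups)]
    remainder = users[n_groups * size:]
    return groups, remainder

groups, rest = split_groups(list(range(1000)), 3, 0.05)
# three groups of 50 users each; 850 users keep the original version
```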
Step S14: and determining the version to be selected according to the test condition of each parallel version.
Usually, to determine the candidate version according to the test condition of each parallel version, the usage data generated by the test users in each test group for their corresponding parallel version is collected first; the usage data is then analyzed by a statistical method such as hypothesis testing, and the candidate version is determined from the analysis results.
In practical application, the best-performing parallel version needs to be determined as the candidate version, and what counts as "best-performing" depends on the specific product. For an application (APP), optimal performance may be reflected in the most downloads, the most logins, the longest usage time, and so on; for a web page, it may be reflected in the most page views, the longest time spent on the page, and so on; other products have their own evaluation criteria. Accordingly, the collected usage data differs by product: for an APP it may be download counts, login counts and the like, and for a web page it may be page-view counts and the like. In general, the usage data reflects how the test users in a test group use the corresponding parallel version.
After the usage data is collected, the usage data may be analyzed by a statistical method such as hypothesis testing, so as to determine the candidate version according to the analysis result, but the candidate version may also be determined by other methods.
This can be specifically explained by taking a hypothesis test as an example. Hypothesis testing can generally be used to determine whether sample-to-sample differences or sample-to-population differences are caused by sampling errors or substantial differences. The rationale is to make certain assumptions about the characteristics of the population and then to infer whether the assumptions should be rejected or accepted through sampling studies and statistical reasoning.
When determining the candidate version through hypothesis testing, each parallel version can be hypothesized in turn to be the candidate version, and the correctness of that hypothesis verified by analyzing the collected usage data. For example, each parallel version is taken in turn as the current parallel version, the current parallel version is hypothesized to be the candidate version, and the hypothesis is checked against the usage data of each parallel version.
Of course, when the candidate version in each parallel version is determined by hypothesis testing, it may also be assumed that each parallel version is a non-candidate version, and then the correctness of the hypothesis is verified by analyzing the collected usage data. For example, each parallel version is sequentially used as a current parallel version, and assuming that the current parallel version is a non-candidate version (i.e., assuming that the current parallel version is not a candidate version), whether the assumption is correct or not is verified by analyzing the usage data of each parallel version. During hypothesis testing, the hypothesized conclusions may be set as desired.
In the following, each parallel version is hypothesized to be a non-candidate version (referred to as the null hypothesis), and the correctness of the null hypothesis is checked by analyzing the usage data. Several verification approaches are listed below:
In a first approach, a significance level (α) is fixed, and the P value of the current parallel version is determined. When the P value is smaller than the significance level, the null hypothesis is rejected and the current parallel version is a candidate version. Here the P value is the probability, assuming the null hypothesis is true, of obtaining the observed sample result or a more extreme one.
In this approach of determining the candidate version by comparing P values with the significance level, the P value of each parallel version is determined, and any parallel version whose P value is smaller than the significance level is determined as a candidate version.
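As an illustration of this first approach, a one-sided pooled two-proportion z-test is one common way to obtain such a P value; the conversion counts below are invented for the example and are not from the patent.

```python
from math import sqrt, erf

def two_proportion_p_value(x_a, n_a, x_b, n_b):
    """One-sided P value that version A's conversion rate exceeds B's,
    using the normal approximation (pooled two-proportion z-test)."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(Z > z)

alpha = 0.05
p = two_proportion_p_value(60, 500, 40, 500)  # 12% vs 8% conversion
is_candidate = p < alpha                      # reject the null hypothesis
```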
In a second approach, the null hypothesis is checked by determining the statistical power of each parallel version. When the statistical power of a parallel version meets statistical significance, that parallel version can be determined as a candidate version. Statistical power is the probability that a hypothesis test correctly rejects the null hypothesis when the alternative hypothesis is true.
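For this second approach, the power of a one-sided two-proportion test can be approximated with the normal distribution; the 0.8 bar and the rates below are conventional illustrative choices, not values from the patent.

```python
from math import sqrt, erf

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_proportions(p_a, p_b, n_a, n_b, z_alpha=1.645):
    """Approximate power of a one-sided two-proportion z-test: the chance
    of detecting the true difference p_a - p_b at the given z threshold."""
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return normal_cdf((p_a - p_b) / se - z_alpha)

# A version whose power clears a conventional 0.8 bar could be kept
# as the candidate; both the bar and the rates here are illustrative.
power = power_two_proportions(0.12, 0.08, 2000, 2000)
```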
In a third approach, the confidence interval of each parallel version is determined through hypothesis testing, where a confidence interval has an upper limit and a lower limit. When both the upper and lower limits of a parallel version's confidence interval are positive, the null hypothesis is rejected and that parallel version is a candidate version; thus any parallel version whose confidence-interval limits are both positive can be determined as a candidate version.
A confidence interval is an interval estimate of a population parameter constructed from sample statistics. In statistics, the confidence interval of a probability sample is an interval estimate of some population parameter; it expresses the probability with which the true value of the parameter falls within the interval around the measurement.
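For this third approach, a normal-approximation confidence interval for the difference between two conversion rates can be computed as follows; the counts are illustrative assumptions.

```python
from math import sqrt

def diff_confidence_interval(x_a, n_a, x_b, n_b, z=1.96):
    """95% CI (z = 1.96) for the difference in conversion rates A - B."""
    p_a, p_b = x_a / n_a, x_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    d = p_a - p_b
    return d - z * se, d + z * se

lo, hi = diff_confidence_interval(120, 1000, 80, 1000)
keep_as_candidate = lo > 0 and hi > 0   # both limits positive: reject H0
```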
In a fourth approach, the lift rate of each parallel version relative to the other parallel versions is determined. If a parallel version's lift rate relative to every other parallel version is positive, the null hypothesis is rejected and that parallel version is a candidate version; thus the parallel version whose lift rate relative to the others is positive can be determined as the candidate version.
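This fourth approach reduces to a simple relative-lift computation; the rates below are invented for the example.

```python
def lift(rate, baseline):
    """Relative lift of one version's metric over another's."""
    return (rate - baseline) / baseline

# Version B compared against each of the others (rates are illustrative):
rates = {"A": 0.08, "B": 0.12, "C": 0.09}
lifts_b = {v: lift(rates["B"], r) for v, r in rates.items() if v != "B"}
# B is a candidate if its lift over every other version is positive
is_candidate = all(x > 0 for x in lifts_b.values())
```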
Step S15: and adding new testing users to the testing group corresponding to the candidate version so as to determine a new candidate version according to the testing condition of each parallel version, and adding new testing users to the testing group corresponding to the new candidate version until a cycle termination condition is met.
After the candidate version is determined, a "reward" may be given to its test group, that is, new test users are added to that group. For example, if the test users of test group 1, test group 2 and test group 3 use parallel versions A, B and C respectively, and parallel version B is determined as the candidate version, new test users are added to test group 2. The addition may be by a preset number or by a preset proportion: the preset number is a fixed count of new test users (for example, 500) added to the candidate version's group each time, while the preset proportion is a fixed share of the total number of test users (for example, 5% per round) added to the candidate version's group each time.
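The reward step itself might look like the following hypothetical helper, which moves unassigned users from the second part into the candidate's group; the function name and signature are assumptions for illustration.

```python
def reward(groups, candidate_idx, pool, n_new):
    """Move up to n_new unassigned users into the candidate's test group."""
    n_new = min(n_new, len(pool))            # never overdraw the pool
    groups[candidate_idx].extend(pool[:n_new])
    del pool[:n_new]                          # remove them from the pool
```

For a preset proportion, `n_new` would be computed as, say, `int(0.05 * total_users)` each round.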
After adding new test users to the test group of the candidate version, whether a loop-termination condition is satisfied may be checked. When it is satisfied, the loop terminates and step S16 is executed. When it is not satisfied, a new candidate version is determined again according to the test condition of each parallel version (at this point the original candidate version's group already contains the newly added test users); after the new candidate version is determined, its test group is rewarded in turn, that is, new test users are added to it. This loop of determining and rewarding a candidate version continues until the termination condition is met, after which the parallel version corresponding to the test group with the largest number of test users can be determined as the optimal version.
Of course, during the loop, each new candidate version may be determined in the same manner as in step S14, and a preset number or preset proportion of new test users may likewise be added to the test group of the newly determined candidate version, which is not described again here.
It should be noted that, in practical applications, the loop-termination condition may take various forms. For example, once all test users have been assigned to test groups, no further rewards are possible, so "all test users have been distributed to the test groups" can serve as the termination condition. Alternatively, the loop may be terminated when the proportion of test users in some test group exceeds a preset threshold of the total; that is, "a test group whose share of all test users exceeds a preset threshold exists" serves as the termination condition, where the threshold may be set to 50% (or another value). When one test group holds more than 50% of all test users, that group holds the majority, and the loop may be terminated.
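The two example termination conditions can be combined in one check; this is an illustrative sketch, with the 50% threshold as the default.

```python
def should_terminate(groups, pool, total_users, threshold=0.5):
    """True when either of the patent's example conditions holds:
    all users are assigned, or one group exceeds the threshold share."""
    if not pool:                                   # all users assigned
        return True
    return any(len(g) / total_users > threshold for g in groups)
```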
By determining a candidate version, rewarding its test group, re-determining a candidate version and rewarding again until the termination condition is met, and then determining the parallel version corresponding to the test group with the largest number of test users as the optimal version, the uncertainty of user preference is fully considered, and the determined optimal version is generally more acceptable to users.
Step S16: and determining the parallel version corresponding to the test group with the largest number of test users as the optimal version.
After the loop-termination condition is met and the loop terminates, the number of test users in each test group is counted, and the parallel version corresponding to the test group with the largest number of test users is determined as the optimal version. Because every test group starts with the same number of test users, and test users are added only by repeatedly determining a candidate version and rewarding it, the final group sizes fully reflect the test users' acceptance of each parallel version, so the optimal version determined in this way generally has higher acceptance.
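Step S16 then amounts to an argmax over group sizes; the helper below is illustrative.

```python
def optimal_version(groups, versions):
    """The version whose test group grew largest is declared optimal."""
    sizes = [len(g) for g in groups]
    return versions[sizes.index(max(sizes))]
```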
Of course, after determining the optimal version, the optimal version may also be released.
With the method provided in embodiment 1, corresponding parallel versions are allocated to a plurality of test groups, each with the same number of test users; a candidate version is determined according to the test condition of each test group, new test users are added to it, a new candidate version is then determined according to the updated test conditions and rewarded in turn, and this loop continues until a termination condition is satisfied, at which point the parallel version corresponding to the test group with the largest number of test users is determined as the optimal version. Compared with the prior-art approach of collecting and analyzing usage data once to determine the optimal version, this repeated determine-and-reward process fully considers the uncertainty of user preference, so the determined optimal version is generally more acceptable to users, which solves the problem in the prior art.
In addition, in the prior art, after each parallel version is delivered to a corresponding test group, product use data of the parallel version by a test user is collected within a period of time (for example, 2 to 4 weeks), and the use data is uniformly analyzed after the data collection is completed.
With the above method, after the corresponding parallel versions are distributed to the test groups with the same number of test users, the usage data of the test users in each test group can be collected in real time, candidate versions can be determined through real-time hypothesis-testing analysis, and data collection and analysis then continue throughout the test.
It should be noted that the method provided in embodiment 1 may generally be executed by a server, and the steps may all be executed by the same device of the server, or different devices of the server may execute different steps of the method. For example, steps S11 and S12 may both be executed by device 1; alternatively, step S11 may be executed by device 1 and step S12 by device 2; and so on.
The foregoing is a detailed description of the method provided herein. For ease of understanding, a specific example is given below for further explanation. As shown in fig. 2, the steps of this example are as follows:
step S21: creating parallel versions A, B and C for the test metric;
step S22: allocating parallel version A to test group 1, parallel version B to test group 2, and parallel version C to test group 3, where test group 1, test group 2 and test group 3 each contain 5% of all test users.
The remaining 85% of test users, who are not yet involved in the test, may continue to use the original version.
Step S23: collecting the use data of the parallel versions A, B and C of the test users of the test group 1, the test group 2 and the test group 3 respectively;
step S24: analyzing the collected usage data by hypothesis testing;
step S25: determining a version to be selected according to the analysis result;
step S26: adding 5% of test users to the test group corresponding to the version to be selected;
for example, if the version determined in step S25 is parallel version A, step S26 adds 5% more test users to test group 1, which corresponds to parallel version A.
Step S27: judging whether the loop termination condition is met; if not, executing step S23 again; if so, executing step S28;
the loop termination condition may be that all test users have been allocated to test groups, or that the number of test users in some test group exceeds 50% of the total number of test users.
Step S28: determining the parallel version corresponding to the test group with the largest number of test users as the optimal version.
Of course, in practical applications, after the optimal version is determined, the optimal version may be released.
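The flow of steps S21 to S28 can be sketched as a small simulation. Everything concrete below is assumed for illustration only: the total user count, the conversion rates of versions A, B and C, and the use of the observed conversion rate as a simple stand-in for the hypothesis-testing analysis of steps S24 and S25.

```python
import random

random.seed(1)
TOTAL_USERS = 10_000
STEP = int(TOTAL_USERS * 0.05)                   # 5% of all test users
true_rate = {"A": 0.12, "B": 0.10, "C": 0.08}    # hypothetical conversion rates
group_size = {v: STEP for v in true_rate}        # S22: 5% to each test group
conv = {v: 0 for v in true_rate}

def collect(version, users):
    """S23: simulate usage data for newly added test users."""
    return sum(random.random() < true_rate[version] for _ in range(users))

for v in true_rate:
    conv[v] += collect(v, STEP)

while True:
    # S24/S25: analyze usage data; here the highest observed rate
    # stands in for the hypothesis-testing analysis.
    selected = max(true_rate, key=lambda v: conv[v] / group_size[v])
    # S27: loop termination - some group exceeds 50% of all users,
    # or every test user has already been allocated.
    if group_size[selected] > TOTAL_USERS * 0.5:
        break
    if sum(group_size.values()) + STEP > TOTAL_USERS:
        break
    # S26: add 5% more test users to the selected version's group.
    group_size[selected] += STEP
    conv[selected] += collect(selected, STEP)

# S28: the group with the most test users holds the optimal version.
optimal = max(group_size, key=group_size.get)
print(optimal)
```

Because users are added only to whichever version currently looks best, the allocation adapts as evidence accumulates, which is the core difference from the one-shot prior-art analysis.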
Example 2
Based on the same inventive concept as embodiment 1, embodiment 2 provides a version test apparatus that can be used to solve the problems in the prior art. As shown in fig. 3, the apparatus 30 includes: an allocating unit 301, a first determining unit 302, an adding unit 303 and a second determining unit 304, wherein:
the allocating unit 301 is configured to allocate corresponding parallel versions to a plurality of test groups with the same number of test users;
the first determining unit 302 is configured to determine a version to be selected according to the test condition of each parallel version;
the adding unit 303 is configured to add new test users to the test group corresponding to the version to be selected, to determine a new version to be selected again according to the test condition of each parallel version, and to add new test users to the test group corresponding to the new version to be selected, until a loop termination condition is met;
the second determining unit 304 is configured to determine the parallel version corresponding to the test group with the largest number of test users as the optimal version.
Since the apparatus 30 provided in embodiment 2 of the present application adopts the same inventive concept as embodiment 1, it can likewise solve the problems in the prior art; details are not repeated here. In addition, in practical applications the apparatus 30 may be combined with specific hardware to obtain further technical effects; for example, the units of the apparatus 30 may be deployed on different devices of a distributed server, and releasing the product through the cooperation of those devices may further improve the efficiency of version testing.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A version test method, comprising:
distributing corresponding parallel versions to a plurality of test groups with the same number of test users; wherein the sum of the numbers of test users in the test groups is less than the total number of test users, all test users being divided into two parts, one part being evenly distributed among the test groups and the other part not being allocated to any test group at this time;
acquiring, in real time, the use data of the test users in each test group for the corresponding parallel version, and performing real-time hypothesis testing analysis to determine a version to be selected, then continuing to acquire use data and analyze the data; wherein the acquisition of use data and the hypothesis testing analysis are fused, the previous hypothesis testing analysis being performed while the next use data is acquired;
adding new test users to the test group corresponding to the version to be selected, judging whether a loop termination condition is met, and terminating the loop when the loop termination condition is met; when the loop termination condition is not met, determining a new version to be selected again according to the test condition of each parallel version, the new test users having at this point already been added to the test group corresponding to the original version to be selected, and, after the new version to be selected is determined, adding new test users to the test group corresponding to it, so that versions to be selected are determined and test users are added in a loop that terminates only when the loop termination condition is met;
determining the parallel version corresponding to the test group with the largest number of test users as the optimal version;
wherein, the cycle termination condition is any one of the following conditions:
all test users are distributed to each test group;
and there exists, among the test groups, a test group in which the proportion of the number of test users to the total number of test users exceeds a preset threshold.
2. The method of claim 1, wherein determining the new candidate version based on the test condition of each parallel version comprises:
collecting the use data of the test users in each test group to the corresponding parallel version;
analyzing the usage data by hypothesis testing;
and determining the version to be selected according to the analysis result.
3. The method of claim 2, wherein analyzing the usage data by hypothesis testing comprises:
analyzing the use data through hypothesis testing, and respectively determining the lift rate of each parallel version relative to the other parallel versions;
determining the version to be selected according to the analysis result, specifically:
and determining the parallel version with a positive lift rate relative to all other parallel versions as the version to be selected.
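A lift rate in the sense of claim 3 can be read as the relative improvement of one version's observed metric over another's, a version being kept only when its lift against every other version is positive. The sketch below follows that reading; the function name and the conversion rates are hypothetical.

```python
def lift(rate_a, rate_b):
    """Relative lift of version a over version b, e.g. 0.2 == +20%."""
    return (rate_a - rate_b) / rate_b

# Hypothetical observed conversion rates per parallel version
rates = {"A": 0.12, "B": 0.10, "C": 0.08}

# Keep versions whose lift over every other version is positive
candidates = [
    v for v in rates
    if all(lift(rates[v], rates[o]) > 0 for o in rates if o != v)
]
print(candidates)  # ['A']
```

Only version A has a positive lift against both B and C, so it alone survives the filter.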
4. The method of claim 2, wherein analyzing the usage data by hypothesis testing comprises:
analyzing the use data through hypothesis testing, and respectively determining the statistical power of each parallel version;
determining the version to be selected according to the analysis result, specifically:
and determining the parallel version with the statistical power meeting the statistical significance as the candidate version.
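Statistical power in the sense of claim 4 can be approximated, for a two-sided two-proportion z-test, from the observed rates and group sizes via the normal approximation. The sketch below uses one conventional formula, not necessarily the computation intended by the claim, and all numbers are hypothetical.

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)            # critical value
    se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
    shift = abs(p1 - p2) / se                      # standardized effect
    # P(|Z| > z_alpha) under the alternative hypothesis
    return (1 - nd.cdf(z_alpha - shift)) + nd.cdf(-z_alpha - shift)

pw = power_two_proportions(0.12, 0.10, 5000, 5000)
print(round(pw, 3))
```

Power grows with group size, which is why adding users to the group of the version to be selected sharpens the comparison over successive loop iterations.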
5. The method of claim 2, wherein analyzing the usage data by hypothesis testing comprises:
analyzing the use data through hypothesis testing, and respectively determining the upper limit and the lower limit of a confidence interval of each parallel version;
determining the version to be selected according to the analysis result, specifically:
and determining the parallel version with the upper limit and the lower limit of the confidence interval both being positive values as the version to be selected.
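The criterion of claim 5, both confidence limits being positive, can be checked with a Wald interval for the difference between a version's rate and a baseline's. This is a sketch under that assumption (95% confidence, hypothetical counts); the claim does not fix a particular interval construction.

```python
from statistics import NormalDist

def diff_ci(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Wald confidence interval for p_a - p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    d = p_a - p_b
    return d - z * se, d + z * se

low, high = diff_ci(600, 5000, 500, 5000)    # version vs. baseline
is_candidate = low > 0 and high > 0           # both limits positive
print(round(low, 4), round(high, 4), is_candidate)
```

An interval lying entirely above zero means the version beats the baseline at the chosen confidence level, matching the claim's both-limits-positive rule.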
6. The method of claim 2, wherein analyzing the usage data by hypothesis testing comprises:
analyzing the use data through hypothesis testing, and respectively determining the P value of each parallel version;
determining the version to be selected according to the analysis result, specifically:
determining the version to be selected according to the comparison between the P value and the significance level;
and when the P value is smaller than the significance level, the null hypothesis does not hold, and the current parallel version is the version to be selected.
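Claim 6 compares the P value with the significance level and rejects the null hypothesis (equal rates) when the P value is smaller. A sketch with a two-sided two-proportion z-test; the significance level, counts, and function name are hypothetical.

```python
from statistics import NormalDist

def p_value_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for H0: the versions' rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(z))

ALPHA = 0.05                                  # significance level
p = p_value_two_proportions(600, 5000, 500, 5000)
reject_null = p < ALPHA                       # H0 rejected -> version selected
print(round(p, 4), reject_null)
```

When `reject_null` is true, the observed difference is unlikely under equal rates, so the current parallel version becomes the version to be selected.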
7. The method of claim 1, wherein before assigning the corresponding parallel versions to a plurality of test groups having the same number of test users, respectively, the method further comprises:
determining a test index;
and creating a plurality of parallel versions according to the test indexes.
8. The method of claim 1, wherein adding new test users to the test group corresponding to the candidate version specifically comprises:
and adding a preset number or a preset proportion of new testing users to the testing group corresponding to the to-be-selected version.
9. A version test apparatus, comprising: an allocation unit, a first determination unit, an addition unit, and a second determination unit, wherein:
the allocation unit is used for respectively distributing corresponding parallel versions to a plurality of test groups with the same number of test users; wherein the sum of the numbers of test users in the test groups is less than the total number of test users, all test users being divided into two parts, one part being evenly distributed among the test groups and the other part not being allocated to any test group at this time;
the first determining unit is used for acquiring, in real time, the use data of the test users in each test group for the corresponding parallel version, and performing real-time hypothesis testing analysis to determine a version to be selected, then continuing to acquire use data and analyze the data; wherein the acquisition of use data and the hypothesis testing analysis are fused, the previous hypothesis testing analysis being performed while the next use data is acquired;
the adding unit is used for adding new test users to the test group corresponding to the version to be selected, judging whether a loop termination condition is met, and terminating the loop when the loop termination condition is met; when the loop termination condition is not met, a new version to be selected is determined again according to the test condition of each parallel version, the new test users having at this point already been added to the test group corresponding to the original version to be selected, and, after the new version to be selected is determined, new test users are added to the test group corresponding to it, so that versions to be selected are determined and test users are added in a loop that terminates only when the loop termination condition is met;
the second determining unit is used for determining the parallel version corresponding to the test group with the largest number of test users as the optimal version;
wherein, the cycle termination condition is any one of the following conditions:
all test users are distributed to each test group;
and there exists, among the test groups, a test group in which the proportion of the number of test users to the total number of test users exceeds a preset threshold.
CN201710279071.3A 2017-04-25 2017-04-25 Version testing method and device Expired - Fee Related CN106959925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710279071.3A CN106959925B (en) 2017-04-25 2017-04-25 Version testing method and device


Publications (2)

Publication Number Publication Date
CN106959925A CN106959925A (en) 2017-07-18
CN106959925B true CN106959925B (en) 2020-06-30

Family

ID=59485023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710279071.3A Expired - Fee Related CN106959925B (en) 2017-04-25 2017-04-25 Version testing method and device

Country Status (1)

Country Link
CN (1) CN106959925B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107480056B (en) * 2017-07-31 2023-04-07 北京云测信息技术有限公司 Software testing method and device
CN108415845B (en) * 2018-03-28 2019-05-31 北京达佳互联信息技术有限公司 Calculation method, device and the server of AB test macro index confidence interval
CN108874660A (en) * 2018-05-03 2018-11-23 北京奇虎科技有限公司 A kind of application testing method and device
CN109299014B (en) * 2018-09-28 2021-10-08 北京云测信息技术有限公司 Method for automatically adjusting flow in version test
CN109120720A (en) * 2018-09-28 2019-01-01 北京云测信息技术有限公司 A method of automatic adjustment version tests flow
CN111950821B (en) * 2019-05-15 2023-07-25 腾讯科技(深圳)有限公司 Test method, device and server
CN111400656A (en) * 2020-03-11 2020-07-10 中国标准化研究院 Method and device for judging use quality or performance of product
CN111708689A (en) * 2020-05-19 2020-09-25 北京奇艺世纪科技有限公司 Method and device for modifying AB experiment and electronic equipment
CN113268414A (en) * 2021-05-10 2021-08-17 Oppo广东移动通信有限公司 Distribution method and device of experimental versions, storage medium and computer equipment
CN114390105A (en) * 2022-03-01 2022-04-22 阿里巴巴(中国)有限公司 Enterprise user distribution method and device based on test

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008074529A2 (en) * 2006-12-21 2008-06-26 International Business Machines Corporation Method, system and computer program for performing regression tests
US20130006568A1 (en) * 2010-06-03 2013-01-03 International Business Machines Corporation Test Operation
CN102095572A (en) * 2010-12-06 2011-06-15 广州市熠芯节能服务有限公司 Product performance test method based on benchmarking product comparison
US8732665B2 (en) * 2011-06-28 2014-05-20 Microsoft Corporation Deploying environments for testing by providing instantaneous availability of prebuilt environments
CN102222043B (en) * 2011-07-08 2015-06-17 华为软件技术有限公司 Testing method and testing device
CN102902619B (en) * 2011-07-29 2015-09-09 阿里巴巴集团控股有限公司 The regression testing method of web application and device
CN103324566B (en) * 2012-03-20 2016-04-06 阿里巴巴集团控股有限公司 A kind of multi-version test method and device
CN104102576A (en) * 2013-04-12 2014-10-15 阿里巴巴集团控股有限公司 Multi-version test method and device
CN105740137B (en) * 2014-12-08 2018-07-31 阿里巴巴集团控股有限公司 Divide bucket test method and the method, apparatus and system of configuration information are provided



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200630