CN112905463A - Software test monitoring method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN112905463A (application CN202110177312.XA)
- Authority
- CN
- China
- Prior art keywords
- preset
- user cluster
- software
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3692—Test management for test results analysis
Abstract
The embodiment of the application provides a software test monitoring method and device, electronic equipment and a readable storage medium, relating to the technical field of testing. The method comprises the following steps: obtaining a first count of target events occurring while a first user cluster uses test-version software, and a second count of target events occurring while a second user cluster uses reference-version software, wherein the target events are negative events affecting user experience; calculating a count difference from the first count and the second count; judging whether the count difference reaches a first threshold, wherein the first threshold is calculated by a simple sequential test algorithm from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree; and if so, generating alarm information, wherein the alarm information prompts stopping the test. In this way, alarm information can be generated in time when the test carries risk, so that monitoring keeps the negative impact within the users' tolerance and prevents serious harm to the user experience.
Description
Technical Field
The present application relates to the field of test technologies, and in particular, to a software test monitoring method and apparatus, an electronic device, and a readable storage medium.
Background
In internet analytics, an A/B test (also known as a bucket test or split test) is a randomized experiment, usually with two variants, A and B; following the controlled-variable method, a single variable is changed and the data from A and B are compared to draw experimental conclusions. In short, two schemes are designed for the same goal: one portion of users uses scheme A, another portion uses scheme B, the usage of both groups is recorded, and the scheme that better meets the design goal is identified.
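As an illustration (not part of the patent), the split of users into the two schemes is typically done by hashing a user identifier into buckets, so that assignment is deterministic and reproducible across sessions; the function and experiment names below are hypothetical:

```python
# Hypothetical sketch of deterministic A/B bucketing: each user is hashed
# into one of 100 buckets, and the bucket decides the variant.
import hashlib

def assign_variant(user_id: str, experiment: str = "exp1") -> str:
    """Deterministically map a user to scheme A or scheme B."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100      # bucket in 0..99
    return "A" if bucket < 50 else "B"  # 50/50 split

# The same user always lands in the same group:
assert assign_variant("user42") == assign_variant("user42")
```

Because the hash depends on the experiment name, the same user can fall into different groups in different experiments, which keeps experiments independent of each other.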
An online test may involve a large number of users, whose experience may be significantly impaired by the negative effects of the experimental variable. To avoid this, the test must be monitored so that the risk stays within a tolerable range. How to monitor a test so as to correctly signal its risk (i.e., that the risk is about to exceed the test users' tolerance) has therefore become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a software test monitoring method and device, electronic equipment and a readable storage medium, which can generate alarm information in time when a test carries risk, so that monitoring keeps the negative impact within the users' tolerance and prevents serious harm to the user experience.
The embodiment of the application can be realized as follows:
in a first aspect, an embodiment of the present application provides a software test monitoring method, including:
obtaining a first count of target events occurring while a first user cluster uses test-version software, and a second count of target events occurring while a second user cluster uses reference-version software, wherein the target events are negative events affecting user experience;
calculating a count difference from the first count and the second count;
judging whether the count difference reaches a first threshold, wherein the first threshold is calculated by a simple sequential test algorithm from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree;
and if the count difference reaches the first threshold, generating alarm information, wherein the alarm information prompts stopping use of the test-version software in the first user cluster and/or stopping use of the reference-version software in the second user cluster.
In a second aspect, an embodiment of the present application provides a software test monitoring apparatus, including:
a data acquisition module, configured to obtain a first count of target events occurring while a first user cluster uses test-version software and a second count of target events occurring while a second user cluster uses reference-version software, wherein the target events are negative events affecting user experience;
a calculation module, configured to calculate a count difference from the first count and the second count;
a judgment module, configured to judge whether the count difference reaches a first threshold, wherein the first threshold is calculated by a simple sequential test algorithm from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree;
and a processing module, configured to generate alarm information when the count difference reaches the first threshold, wherein the alarm information prompts stopping use of the test-version software in the first user cluster and/or stopping use of the reference-version software in the second user cluster.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, wherein the memory stores machine-executable instructions which, when executed by the processor, implement the software test monitoring method described in any one of the foregoing embodiments.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the software test monitoring method according to any one of the foregoing embodiments.
The embodiments of the application provide a software test monitoring method, a software test monitoring device, electronic equipment and a readable storage medium. A first count of target events occurring while a first user cluster uses test-version software and a second count of target events occurring while a second user cluster uses reference-version software are obtained, the target events being negative events affecting user experience. It is then judged whether the count difference calculated from the first count and the second count reaches a first threshold calculated by a simple sequential test algorithm from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree; if so, alarm information prompting that the test be stopped is generated. Thus, whether the test is risky can be judged in combination with the simple sequential test algorithm, and alarm information is generated when it is, so that the testing staff can take corresponding measures in time and the test is prevented from seriously harming the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered limiting of its scope; for those skilled in the art, other related drawings can be derived from them without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a software testing monitoring method according to an embodiment of the present application;
fig. 3 is a second schematic flowchart of a software testing monitoring method according to an embodiment of the present application;
fig. 4 is a third schematic flowchart of a software testing monitoring method according to an embodiment of the present application;
fig. 5 is a block diagram illustrating a software test monitoring apparatus according to an embodiment of the present disclosure.
Icon: 100-an electronic device; 110-a memory; 120-a processor; 130-a communication unit; 200-software test monitoring device; 210-a data acquisition module; 220-a calculation module; 230-a judgment module; 240-processing module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may be, but is not limited to, a computer, a server, etc. The electronic device 100 includes a memory 110, a processor 120, and a communication unit 130. The elements of the memory 110, the processor 120 and the communication unit 130 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 110 is used to store programs or data. The memory 110 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions. For example, the memory 110 stores a software test monitoring apparatus 200, and the software test monitoring apparatus 200 includes at least one software functional module which can be stored in the memory 110 in the form of software or firmware (firmware). The processor 120 executes various functional applications and data processing by running software programs and modules stored in the memory 110, such as the software testing monitoring apparatus 200 in the embodiment of the present application, so as to implement the software testing monitoring method in the embodiment of the present application.
The communication unit 130 is used for establishing a communication connection between the electronic apparatus 100 and another communication terminal via a network, and for transceiving data via the network.
It should be understood that the structure shown in fig. 1 is only a schematic structural diagram of the electronic device 100, and the electronic device 100 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2, fig. 2 is a flowchart illustrating a software testing monitoring method according to an embodiment of the present disclosure. The method may be applied to the electronic device 100. The specific flow of the software testing and monitoring method is described in detail below. The method may include steps S120 to S150.
Step S120, obtaining a first count of target events occurring while the first user cluster uses the test-version software, and a second count of target events occurring while the second user cluster uses the reference-version software.
The test-version software and the reference-version software are two schemes designed for the same goal: the test version serves as the experimental group, and the reference version serves as the non-intervened control group. Optionally, they may be two versions of the same software that differ only in whether a feature switch is on; that is, whether a function is opened to the user can be controlled dynamically for the same software, yielding the test version and the reference version. Either may be an APP (Application), a web page, or another file to be tested. It should be understood that the above is only an example, and the test-version and reference-version software may be determined according to actual requirements.
During testing, test data can be collected in real time. The test data may include a first count of target events occurring while the first user cluster uses the test-version software, and a second count of target events occurring while the second user cluster uses the reference-version software. The first user cluster is the set of users using the test-version software, i.e., the users participating in the test of the test version; the second user cluster is the set of users using the reference-version software, i.e., the users participating in the test of the reference version. A target event is a negative event affecting user experience, determined by the negative events of concern in the test (for example, an abnormal running event).
Step S130, calculating a count difference from the first count and the second count.
With the first count and the second count obtained, the count difference may be calculated from them in the manner corresponding to the objective of concern in the test.
Step S140, determining whether the count difference reaches the first threshold.
The first threshold is calculated by a simple sequential test algorithm from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree. The simple sequential test algorithm calculates, under the set accuracy and recall rates, the event counts at which the experiment can be stopped, and it reduces the number of observations required. Once the count difference has been calculated, judging whether it reaches the first threshold determines whether the current risk is within the tolerance of the users participating in the online test.
If the count difference does not reach the first threshold, the current risk is considered to be within the tolerance of the users participating in the test, and no alarm is needed; that is, there is no prompt that the risk is out of tolerance.
If the count difference reaches the first threshold, step S150 is executed.
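The patent does not spell out the algorithm at this point, but a "simple sequential test" of this kind is reminiscent of Wald's sequential probability ratio test (SPRT) on the question "does each new target event come from the test group with probability 1/2, or with the elevated probability implied by the minimum improvement degree?" The sketch below is our own illustration of a Bernoulli SPRT under that framing, not the patent's exact formula:

```python
import math

def sprt_decision(k_test, n_total, p0=0.5, p1=1 - 1/2.1, alpha=0.01, beta=0.1):
    """Wald SPRT sketch (assumed framing, not the patent's formula).
    k_test  = target events observed in the test group,
    n_total = target events observed in both groups combined.
    H0: each event comes from the test group with probability p0 (no difference).
    H1: it comes from the test group with probability p1 (negative indicator lifted).
    Returns 'alarm' (accept H1), 'safe' (accept H0) or 'continue'."""
    # Log-likelihood ratio of H1 vs H0 after n_total Bernoulli observations.
    llr = (k_test * math.log(p1 / p0)
           + (n_total - k_test) * math.log((1 - p1) / (1 - p0)))
    upper = math.log((1 - beta) / alpha)  # Wald's upper boundary -> alarm
    lower = math.log(beta / (1 - alpha))  # Wald's lower boundary -> safe
    if llr >= upper:
        return "alarm"
    if llr <= lower:
        return "safe"
    return "continue"
```

With the default p1 = 1 - 1/(2 + 0.1) (a 10% minimum improvement degree), a strongly skewed sample such as 600 of 1000 events in the test group crosses the alarm boundary, while a balanced 500 of 1000 keeps sampling.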
And step S150, generating alarm information.
If the count difference reaches the first threshold, the current risk may be considered beyond the users' tolerance, and continuing the test would be risky; that is, continuing could seriously impair the user experience. In this case, alarm information is generated. The alarm information prompts that continuing the test carries a high risk of harming the user experience. It may prompt stopping use of the test-version software in the first user cluster and/or stopping use of the reference-version software in the second user cluster, i.e., prompt stopping the test. In this way, the testing staff are correctly alerted that the test is risky.
Optionally, monitoring may stop after the alarm information is generated. After checking the alarm, the testing staff may stop the test and adjust the test-version and/or reference-version software, or continue; the specific handling can be determined according to actual requirements.
Online data are highly random, making accurate real-time monitoring difficult. Experiment monitoring, however, is an experimental scenario: with a non-intervened control group available, a suitable hypothesis-testing method can control the accuracy and recall rates. Applying hypothesis testing to a monitoring scenario requires solving the problem of real-time detection. In the embodiment of the application, a simple sequential test algorithm is adopted: a first threshold is calculated from a preset accuracy rate, a preset recall rate and a preset minimum improvement degree, and alarm information prompting the existence of risk is generated when the count difference, calculated from the first count of target events occurring while the first user cluster uses the test-version software and the second count occurring while the second user cluster uses the reference-version software, reaches the first threshold. Thus, combined with the simple sequential test algorithm, whether the test is risky can be judged and an alarm raised when it is, so that the testing staff can take corresponding measures in time and the test is prevented from seriously harming the user experience.
As a possible implementation manner, please refer to fig. 3, and fig. 3 is a second schematic flowchart of the software testing and monitoring method provided in the embodiment of the present application. The method may further comprise step S111 before step S120.
Step S111, calculating the first threshold in advance from a first preset calculation formula, the preset accuracy rate, the preset recall rate and the preset minimum improvement degree.
Wherein the first preset calculation formula consists of two conditions that jointly determine the first threshold d and a second threshold N (S_n denotes the count difference, first count minus second count, after the first n target events in total):

P( max_{n ≤ N} S_n ≥ d | p = 1/2 ) < α

P( max_{n ≤ N} S_n ≥ d | p = P_t ) > 1 − β

wherein p denotes the probability that a given target event comes from the first user cluster; α represents a preset significance level calculated from the preset accuracy rate; 1 − β represents the preset recall rate, and β represents the false-negative rate; P_t represents the preset probability that the target event occurs in the first user cluster, i.e., while the first user cluster uses the test-version software; P_c represents the preset probability that the target event occurs in the second user cluster, i.e., while the second user cluster uses the reference-version software; P_c and P_t are obtained from the preset minimum improvement degree; d represents the first threshold, and N represents the second threshold.
In this embodiment, P_c and P_t may be obtained from the preset minimum improvement degree according to the actual monitoring scheme. Optionally, for a negative indicator corresponding to the negative events (e.g., a crash rate), it is generally assumed that the experiment should not raise the indicator. When the monitoring scheme is to detect a 10% rise in the negative indicator, i.e., when the preset minimum improvement degree, the Minimum Detectable Effect (MDE), is 10%, then P_c = 1/(2 + mde) and P_t = 1 − 1/(2 + mde), where mde denotes the preset minimum improvement degree. When the monitoring scheme is to detect a 10% reduction, then P_t = 1/(2 + mde) and P_c = 1 − 1/(2 + mde).
In the first preset calculation formula, the first condition controls α as follows: assume the two probabilities are the same, i.e., each is 1/2, so that the next target event is equally likely to occur in the experimental group or the control group. Given N and d, the total probability of the count difference reaching d is computed and constrained to be smaller than the desired α, which guarantees that the misjudgment (false-alarm) probability is smaller than α.
The second condition controls β as follows: assume the two probabilities differ, with the experimental group in fact raised by mde over the control group, so that P_c = 1/(2 + mde) and P_t = 1 − 1/(2 + mde). Likewise, given N and d, the total probability of the count difference reaching d is computed and constrained to be higher than 1 − β, so that a rise of mde is detected with probability at least 1 − β; if the actual rise exceeds mde, the crossing probability is even greater and the probability of a β (missed-detection) error is even lower.
Each of the two conditions in the first preset calculation formula bounds the probability of an error under its own assumption. The first assumes there is in fact no difference, in which case reaching the threshold would be an error, and controls that error rate below α. The second assumes the indicator is in fact raised by mde, in which case failing to reach the threshold would be an error, and controls that error rate below β (i.e., detection probability above 1 − β).
In actual application, the preset accuracy rate, the preset recall rate, P_c and P_t can be set according to actual requirements. The preset accuracy and recall rates are the desired accuracy and recall rates, respectively. How the preset significance level is calculated from the preset accuracy rate depends on whether one-sided or two-sided detection is performed.
Optionally, for one-sided detection, the preset significance level α substituted into the first preset calculation formula is the difference between 1 and the preset accuracy rate, i.e., the preset accuracy rate is 1 − α.
For two-sided detection, α is halved relative to one-sided detection while β is unchanged. That is, the preset significance level α substituted into the first preset calculation formula is 1/2 of the difference between 1 and the preset accuracy rate, i.e., the preset accuracy rate is 1 − 2α. For example, with a preset accuracy rate of 90%, the value of α substituted into the formula is 10% for one-sided detection and half of that, 5%, for two-sided detection. In other words, when rises and falls of the negative indicator are of concern simultaneously and the overall error rate is to be controlled at α, half of α is spent on the positive direction (rises) and half on the negative direction (falls), so the preset significance level substituted into the formula is 1/2 of the difference between 1 and the preset accuracy rate.
Correspondingly, for one-sided detection, whether the count difference is the first count minus the second count or the reverse is determined by the direction of the detection. For example, for positive-direction detection, i.e., when the concern is the experimental group's negative indicator rising above the control group's, the count difference is the first count minus the second count. For two-sided detection, the count difference is the absolute value of the difference between the first count and the second count.
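The parameter derivation described above can be sketched as follows; the function and variable names are our own, not the patent's:

```python
def design_params(accuracy: float, mde: float, two_sided: bool = False):
    """Derive the significance level and the per-event probabilities used by
    the threshold calculation (illustrative sketch).

    accuracy : preset accuracy rate, e.g. 0.99
    mde      : preset minimum improvement degree, e.g. 0.10 for a 10% rise
    """
    # One-sided: alpha = 1 - accuracy.  Two-sided: half of alpha is spent
    # on each direction, so the substituted alpha is (1 - accuracy) / 2.
    alpha = (1 - accuracy) / 2 if two_sided else (1 - accuracy)
    # If the test group's negative indicator is raised by mde, a given
    # target event comes from the control group with probability 1/(2+mde)
    # and from the test group with probability (1+mde)/(2+mde).
    p_c = 1 / (2 + mde)      # probability the event is from the control group
    p_t = 1 - 1 / (2 + mde)  # probability the event is from the test group
    return alpha, p_c, p_t

alpha, p_c, p_t = design_params(accuracy=0.90, mde=0.10)
```

For an accuracy rate of 90% and mde of 10%, one-sided detection gives α = 0.10 with P_c ≈ 0.476 and P_t ≈ 0.524; passing `two_sided=True` halves α to 0.05.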
Referring to fig. 4, fig. 4 is a third schematic flowchart of the software test monitoring method according to an embodiment of the present application. After step S140, when the count difference does not reach the first threshold, the method may further include step S160 and step S170.
Step S160, determining whether the sum of the first count and the second count reaches the second threshold.
Step S170, stopping monitoring.
A count difference below the first threshold indicates that the test has not yet exceeded the users' tolerance. In this case, the sum of the first count and the second count may be calculated as a count sum, and it is determined whether the count sum has reached the second threshold. If so, the test may be regarded as safe, and monitoring may stop at this point to save resources. If not, data collection and monitoring continue.
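The two-threshold decision (alarm at the first threshold, declare safety at the second) can be sketched as one function; the name and the default values, taken from the worked example later in the description, are illustrative:

```python
def monitor_step(a: int, b: int, d: int = 200, n_cap: int = 6004) -> str:
    """One monitoring decision (illustrative sketch).
    a = target-event count in the test group (first count),
    b = target-event count in the control group (second count),
    d and n_cap play the role of the first and second thresholds."""
    if a - b >= d:       # count difference reached the first threshold
        return "alarm"   # risk out of tolerance: prompt to stop the test
    if a + b >= n_cap:   # count sum reached the second threshold
        return "safe"    # test considered safe: stop monitoring
    return "continue"    # keep collecting data and monitoring
```

For one-sided, positive-direction detection the difference is `a - b`, as in step S150 above; a two-sided variant would use `abs(a - b)` instead.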
The software test monitoring method is illustrated below with an example.
Assume the monitoring scheme is: preset accuracy rate 1 − α of 99%, detection of a 10% rise over the control group (i.e., mde = 10%) with one-sided detection, and preset recall rate 1 − β of 90%. Substituting these into the first preset calculation formula gives a first threshold of 200 and a second threshold of 6004. As data are collected, the following actions are performed: determine whether the count of target events in the experimental group minus the count in the control group has reached 200; if so, generate alarm information, i.e., alert the testing staff. If not, determine whether the sum of the two counts has reached 6004; if so, the test is considered safe and monitoring stops; if not, data collection and monitoring continue.
In this way, it can be tested whether the probability of a target event occurring is the same in the experimental group and the control group.
In some actual scenarios, the concern is rather whether the rise or fall exceeds a given magnitude. Optionally, as another possible implementation, an alarm may be raised only when the actual rise or fall of the target-event probability between the experimental group and the control group exceeds a preset change threshold. When rises are of concern, the preset change threshold may be a preset rise threshold; when falls are of concern, it may be a preset fall threshold.
As an alternative implementation, on the basis of the method shown in fig. 3, secondary filtering may be performed so that the alarm information is generated only when the actual change amplitude is greater than the preset change threshold. The actual change amplitude is calculated according to the first number of times, the second number of times, a first total number of times and a second total number of times; it reflects, from the test data, the actual increase or decrease of the probability that the target event occurs in the first user cluster relative to the second user cluster. The first total number of times is the total number of times the first user cluster uses the test version software, and the second total number of times is the total number of times the second user cluster uses the reference version software. After the actual change amplitude is obtained, it may be determined whether it is greater than the preset change threshold, and if so, step S150 is executed: generating alarm information.
Specifically, an initial change value may be obtained from the first number of times, the second number of times, the first total number of times and the second total number of times, and point estimation may then be performed on it to obtain the actual change amplitude. Point estimation is the use of a sample statistic to estimate a population parameter.
For example, if an increase is of concern, the preset change threshold may be a preset increase threshold, here set to 20%. The quotient obtained by dividing a/A by b/B can be calculated, 1 subtracted from it, and point estimation performed on the resulting difference to obtain the actual increase amplitude; it is then determined whether the actual increase amplitude is greater than the 20% preset increase threshold, and if so, the alarm information is generated. Equivalently, point estimation can be performed directly on the quotient, and the alarm information generated if the point-estimation result is greater than 1.2. Here a denotes the first number of times, A the first total number of times, b the second number of times, and B the second total number of times.
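A minimal sketch of this check, using the observed rates directly as the point estimate (the function name and default threshold are illustrative, not from the source):

```python
def lift_exceeds(a: int, A: int, b: int, B: int,
                 lift_threshold: float = 0.2) -> bool:
    """Return True when the estimated relative lift of the target-event
    probability in the experimental group over the control group exceeds
    the preset increase threshold (default 20%).

    a: first number of times   (events in the first user cluster)
    A: first total number      (total uses of the test version software)
    b: second number of times  (events in the second user cluster)
    B: second total number     (total uses of the reference version software)
    """
    quotient = (a / A) / (b / B)           # ratio of observed event rates
    return quotient - 1 > lift_threshold   # equivalently: quotient > 1.2
```

For instance, event rates of 3% versus 2% give a quotient of 1.5, i.e. a 50% estimated lift, which exceeds the 20% threshold.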
As another optional implementation, on the basis of the method shown in fig. 3, the difference value may instead be obtained in the following manner, so that the alarm condition already accounts for the preset change threshold. The first number of times is corrected according to the preset change threshold to obtain a processed first number of times. For example, still taking the question of whether the increase exceeds a 20% preset increase threshold, the first number of times may be divided by 1.2 and the result used as the processed first number of times. The difference value is then obtained by a difference operation on the processed first number of times and the second number of times. If the difference value reaches the first threshold, step S150 is executed: generating alarm information.
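Continuing the 20% example, this corrected-count variant can be sketched as follows (names are illustrative; the first threshold d is assumed to be given):

```python
def corrected_difference_alarm(a: int, b: int, d: int,
                               change_threshold: float = 0.2) -> bool:
    """Correct the first count by the preset change threshold, then apply
    the usual difference-vs-first-threshold check.

    a: first number of times (experimental group events)
    b: second number of times (control group events)
    d: first threshold
    """
    # e.g. divide by 1.2 when the preset increase threshold is 20%
    a_processed = a / (1 + change_threshold)
    return a_processed - b >= d   # alarm when the corrected difference reaches d
```

With d = 200, counts of 600 versus 200 give a corrected difference of 500 − 200 = 300 and trigger the alarm, while 360 versus 200 give 300 − 200 = 100 and do not.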
As another optional implementation, on the basis of the method shown in fig. 2, the first threshold may instead be calculated in advance according to a second preset calculation formula, the preset accuracy rate, the preset recall rate, the preset minimum improvement degree and a preset change threshold, so that exceeding the preset change threshold is already reflected when the difference value reaches the first threshold, with the other processing logic unchanged. The second preset calculation formula is:
wherein alpha represents a preset significance level calculated according to the preset accuracy rate, 1-beta represents the preset recall rate, Pc0 represents a preset initial probability that the target event occurs in the first user cluster, Pt0 represents a preset initial probability that the target event occurs in the second user cluster, Pc0 and Pt0 are calculated according to the preset change threshold, Pc represents a preset probability that the target event occurs in the first user cluster, Pt represents a preset probability that the target event occurs in the second user cluster, Pc and Pt are calculated according to Pc0, Pt0 and the preset minimum improvement degree, d represents the first threshold, and N represents the second threshold.
Optionally, taking the improvement of a negative indicator as an example: Pc0 = 0.5 - threshold, Pt0 = 0.5 + threshold, where threshold denotes the preset increase threshold serving as the preset change threshold when an increase is of concern; Pc = Pc0/[Pc0 + Pt0*(1 + mde)], Pt = 1 - Pc.
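These relations can be written out directly; the sketch below assumes threshold = 0.2 and mde = 0.1 purely for illustration:

```python
def preset_probabilities(threshold: float, mde: float):
    """Compute (Pc, Pt) from the preset change threshold and the preset
    minimum improvement degree, following:
        Pc0 = 0.5 - threshold
        Pt0 = 0.5 + threshold
        Pc  = Pc0 / (Pc0 + Pt0 * (1 + mde))
        Pt  = 1 - Pc
    """
    pc0 = 0.5 - threshold
    pt0 = 0.5 + threshold
    pc = pc0 / (pc0 + pt0 * (1 + mde))
    return pc, 1 - pc
```

For threshold = 0.2 and mde = 0.1 this gives Pc = 0.3/1.07 ≈ 0.2804 and Pt ≈ 0.7196, with Pc + Pt = 1 by construction.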
In use, it can be determined whether the difference value has reached the first threshold calculated by the second preset calculation formula, and if so, the alarm information is generated. If not, it can be determined whether the sum value has reached the second threshold calculated by the same formula, and if so, monitoring can be stopped.
Therefore, according to the embodiments of the present application, once the testers have set the preset accuracy rate, the preset recall rate and the monitoring level (i.e., the preset minimum improvement degree and the preset change threshold) according to actual requirements, whether the risk exceeds their tolerable range can be accurately monitored in real time. Alarm information is generated when it does, so that the testers can take corresponding measures in time and the test is prevented from significantly harming the user experience.
In order to execute the corresponding steps in the above embodiments and the various possible implementations, an implementation of the software test monitoring apparatus 200 is given below; optionally, the software test monitoring apparatus 200 may adopt the device structure of the electronic device 100 shown in fig. 1. Further, referring to fig. 5, fig. 5 is a block diagram of a software test monitoring apparatus 200 according to an embodiment of the present disclosure. It should be noted that the basic principle and the technical effect of the software test monitoring apparatus 200 provided in this embodiment are the same as those of the above embodiments; for parts not mentioned in this embodiment, reference may be made to the corresponding contents above. The software test monitoring apparatus 200 may include: a data acquisition module 210, a calculation module 220, a judgment module 230 and a processing module 240.
The data acquisition module 210 is configured to obtain a first number of times that a first user cluster generates a target event when using the test version software, and a second number of times that a second user cluster generates the target event when using the reference version software. The target event is a negative event affecting the user experience.
the calculating module 220 is configured to calculate the difference between the first number and the second number.
The determining module 230 is configured to determine whether the difference value reaches a first threshold, where the first threshold is calculated by a simple sequential inspection algorithm according to a preset accuracy rate, a preset recall rate and a preset minimum improvement degree.
The processing module 240 is configured to generate alarm information when the difference value reaches the first threshold, where the alarm information is used to prompt stopping use of the test version software in the first user cluster and/or stopping use of the reference version software in the second user cluster.
Optionally, in this embodiment, the processing module 240 is further configured to calculate the first threshold in advance according to a first preset calculation formula, the preset accuracy, the preset recall rate, and a preset minimum improvement degree. Wherein the first preset calculation formula is as follows:
wherein alpha represents a preset significance level calculated according to the preset accuracy rate, 1-beta represents the preset recall rate, Pc represents a preset probability that the target event occurs in the first user cluster, Pt represents a preset probability that the target event occurs in the second user cluster, Pc and Pt are calculated according to the preset minimum improvement degree, d represents the first threshold, and N represents a second threshold.
Alternatively, the modules may be stored in the memory 110 shown in fig. 1 in the form of software or firmware, or may be built into the Operating System (OS) of the electronic device 100, and may be executed by the processor 120 in fig. 1. Meanwhile, the data and program code required to execute the above modules may be stored in the memory 110.
The embodiment of the application also provides a readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the software test monitoring method.
To sum up, the software test monitoring method and apparatus, the electronic device, and the readable storage medium according to the embodiments of the present application obtain a first number of times that a first user cluster generates a target event when using test version software and a second number of times that a second user cluster generates the target event when using reference version software, where the target event is a negative event affecting the user experience; they then determine whether the difference value calculated from the first and second numbers reaches a first threshold calculated by a simple sequential inspection algorithm according to a preset accuracy rate, a preset recall rate and a preset minimum improvement degree; and if so, generate alarm information prompting that the test be stopped. In this way, whether the test is risky can be judged with the simple sequential inspection algorithm, and alarm information is generated when it is, so that the testers can take corresponding measures in time and the test is prevented from significantly harming the user experience.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A software test monitoring method is characterized by comprising the following steps:
the method comprises the steps of obtaining a first frequency of target events when a first user cluster uses test version software, and obtaining a second frequency of the target events when a second user cluster uses reference version software, wherein the target events are negative events influencing user use experience;
calculating according to the first times and the second times to obtain a time difference value;
judging whether the time difference value reaches a first threshold, wherein the first threshold is calculated by a simple sequential inspection algorithm according to a preset accuracy rate, a preset recall rate and a preset minimum improvement degree;
and if the time difference reaches the first threshold value, generating alarm information, wherein the alarm information is used for prompting to stop using the test version software in the first user cluster and/or prompting to stop using the reference version software in the second user cluster.
2. The method of claim 1, further comprising:
the first threshold is obtained by calculation in advance according to a first preset calculation formula, the preset accuracy, the preset recall rate and the preset minimum improvement degree, wherein the first preset calculation formula is as follows:
wherein alpha represents a preset significance level calculated according to the preset accuracy rate, 1-beta represents the preset recall rate, Pc represents a preset probability that the target event occurs in the first user cluster, Pt represents a preset probability that the target event occurs in the second user cluster, Pc and Pt are calculated according to the preset minimum improvement degree, d represents the first threshold, and N represents a second threshold.
3. The method of claim 1, further comprising:
calculating in advance according to a second preset calculation formula, the preset accuracy, the preset recall rate, the preset minimum improvement degree, and a preset change threshold to obtain the first threshold, wherein the second preset calculation formula is as follows:
wherein alpha represents a preset significance level calculated according to the preset accuracy rate, 1-beta represents the preset recall rate, Pc0 represents a preset initial probability that the target event occurs in the first user cluster, Pt0 represents a preset initial probability that the target event occurs in the second user cluster, Pc0 and Pt0 are calculated according to the preset change threshold, Pc represents a preset probability that the target event occurs in the first user cluster, Pt represents a preset probability that the target event occurs in the second user cluster, Pc and Pt are calculated according to Pc0, Pt0 and the preset minimum improvement degree, d represents the first threshold, and N represents the second threshold.
4. The method according to any one of claims 1-3, wherein after determining whether the difference in times reaches a first threshold, the method further comprises:
if the time difference is smaller than the first threshold, judging whether the time sum of the first time and the second time reaches a second threshold, wherein the second threshold is obtained by calculation according to the preset accuracy, the preset recall rate and the preset minimum improvement degree through a simple sequential inspection algorithm;
and if the times and the values reach the second threshold value, stopping monitoring.
5. The method according to claim 2, wherein in the case that the difference of the number of times is not less than the first threshold, before generating the warning information, the method further comprises:
calculating actual variation amplitudes of the probabilities that the target events respectively occur in the first user cluster and the second user cluster according to the first times, the second times, the first total times and the second total times, wherein the first total times are the total times that the first user cluster uses the software of the test version, and the second total times are the total times that the second user cluster uses the software of the reference version;
judging whether the actual change amplitude is larger than a preset change threshold value or not;
and if the actual change amplitude is larger than the preset change threshold, executing the step of generating the alarm information.
6. The method of claim 2, wherein calculating a difference between the first number of times and the second number of times comprises:
correcting the first times according to a preset change threshold value to obtain the processed first times;
and obtaining the frequency difference value through difference value operation according to the processed first frequency and the second frequency.
7. A software test monitoring device, comprising:
the data acquisition module is used for acquiring a first frequency of target events when a first user cluster uses the test version software and acquiring a second frequency of the target events when a second user cluster uses the reference version software, wherein the target events are negative events influencing the user experience;
the calculating module is used for calculating according to the first times and the second times to obtain a time difference value;
the judging module is used for judging whether the time difference reaches a first threshold value, wherein the first threshold value is obtained by calculation according to a preset correct rate, a preset recall rate and a preset minimum improvement degree through a simple sequential inspection algorithm;
and the processing module is used for generating alarm information when the time difference reaches the first threshold, wherein the alarm information is used for prompting to stop using the test version software in the first user cluster and/or prompting to stop using the reference version software in the second user cluster.
8. The apparatus of claim 7,
the calculation module is further configured to calculate in advance according to a first preset calculation formula, the preset accuracy, the preset recall rate, and a preset minimum improvement degree to obtain the first threshold, where the first preset calculation formula is:
wherein alpha represents a preset significance level calculated according to the preset accuracy rate, 1-beta represents the preset recall rate, Pc represents a preset probability that the target event occurs in the first user cluster, Pt represents a preset probability that the target event occurs in the second user cluster, Pc and Pt are calculated according to the preset minimum improvement degree, d represents the first threshold, and N represents a second threshold.
9. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the software test monitoring method of any one of claims 1 to 6.
10. A readable storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the software test monitoring method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110177312.XA CN112905463B (en) | 2021-02-07 | 2021-02-07 | Software test monitoring method and device, electronic equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110177312.XA CN112905463B (en) | 2021-02-07 | 2021-02-07 | Software test monitoring method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112905463A true CN112905463A (en) | 2021-06-04 |
CN112905463B CN112905463B (en) | 2023-10-27 |
Family
ID=76123028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110177312.XA Active CN112905463B (en) | 2021-02-07 | 2021-02-07 | Software test monitoring method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112905463B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113946353A (en) * | 2021-09-30 | 2022-01-18 | 北京五八信息技术有限公司 | Data processing method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140196011A1 (en) * | 2013-01-09 | 2014-07-10 | International Business Machines Corporation | Automatic regression testing based on cyclomatic complexity |
CN110826071A (en) * | 2019-09-24 | 2020-02-21 | 平安科技(深圳)有限公司 | Software vulnerability risk prediction method, device, equipment and storage medium |
CN112131079A (en) * | 2020-09-22 | 2020-12-25 | 北京达佳互联信息技术有限公司 | Data monitoring method and device, electronic equipment and storage medium |
- 2021-02-07 CN CN202110177312.XA patent/CN112905463B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140196011A1 (en) * | 2013-01-09 | 2014-07-10 | International Business Machines Corporation | Automatic regression testing based on cyclomatic complexity |
CN110826071A (en) * | 2019-09-24 | 2020-02-21 | 平安科技(深圳)有限公司 | Software vulnerability risk prediction method, device, equipment and storage medium |
CN112131079A (en) * | 2020-09-22 | 2020-12-25 | 北京达佳互联信息技术有限公司 | Data monitoring method and device, electronic equipment and storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113946353A (en) * | 2021-09-30 | 2022-01-18 | 北京五八信息技术有限公司 | Data processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112905463B (en) | 2023-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020052147A1 (en) | Monitoring device fault detection method and apparatus | |
CN108696368B (en) | Network element health state detection method and equipment | |
CN110445680B (en) | Network traffic anomaly detection method and device and server | |
US20090024356A1 (en) | Determination of root cause(s) of symptoms using stochastic gradient descent | |
CN116611712B (en) | Semantic inference-based power grid work ticket evaluation system | |
JP5768983B2 (en) | Contract violation prediction system, contract violation prediction method, and contract violation prediction program | |
US10581665B2 (en) | Content-aware anomaly detection and diagnosis | |
CN107391335B (en) | Method and equipment for checking health state of cluster | |
JP2007310665A (en) | Process monitoring apparatus | |
CN116757367B (en) | Three-dimensional visual power grid operation data analysis system | |
CN109597746B (en) | Fault analysis method and device | |
CN107395608B (en) | Network access abnormity detection method and device | |
JP2015028700A (en) | Failure detection device, failure detection method, failure detection program and recording medium | |
CN112650608B (en) | Abnormal root cause positioning method, related device and equipment | |
CN116747528B (en) | Game background user supervision method and system | |
CN110855703A (en) | Intelligent risk identification system and method and electronic equipment | |
US10900869B2 (en) | Detection of bearing carbonization failure in turbine systems | |
CN114936675A (en) | Fault early warning method and device, storage medium and electronic equipment | |
CN110808864A (en) | Communication early warning method, device and system | |
CN112905463B (en) | Software test monitoring method and device, electronic equipment and readable storage medium | |
CN113590427B (en) | Alarm method, device, storage medium and equipment for monitoring index abnormality | |
CN108683662B (en) | Individual online equipment risk assessment method and system | |
CN114116391A (en) | Redis instance health detection method, device, equipment and storage medium | |
US20190041838A1 (en) | Detection of temperature sensor failure in turbine systems | |
CN116666785A (en) | Energy storage battery system safety early warning method and device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||