CN112527573A - Interface testing method, device and storage medium - Google Patents

Interface testing method, device and storage medium

Info

Publication number
CN112527573A
CN112527573A (application CN201910886571.2A; granted as CN112527573B)
Authority
CN
China
Prior art keywords
data
result
interface
test data
cluster
Prior art date
Legal status
Granted
Application number
CN201910886571.2A
Other languages
Chinese (zh)
Other versions
CN112527573B (en)
Inventor
钱明芝
蔡铁炯
冯彬
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN201910886571.2A
Publication of CN112527573A
Application granted
Publication of CN112527573B
Legal status: Active

Classifications

    • G06F 11/221: Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing, using arrangements specific to the hardware being tested, to test buses, lines or interfaces, e.g. stuck-at or open line faults
    • G06F 11/2273: Test methods
    • G06F 18/231: Clustering techniques; hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G06F 18/2321: Clustering techniques; non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiments of the present application disclose an interface testing method, an interface testing device, and a storage medium. The method includes: collecting production environment data; performing cluster analysis on the production environment data to obtain a final cluster-analysis result; determining final test data from the production environment data according to that result; and testing the interface with the final test data to obtain a final test result. In this technical scheme, the production environment data are screened by cluster analysis to determine the interface test data. Compared with determining test data through manual simulation or by selecting a small amount of online data, this saves the time spent screening test data and improves the efficiency of interface testing.

Description

Interface testing method, device and storage medium
Technical Field
The embodiments of the present application relate to electronic communication technology, and in particular, but not exclusively, to an interface testing method, an interface testing device, and a storage medium.
Background
Interface testing tests the interfaces between components of a system. It is mainly used to check the interaction points between external systems and between internal subsystems; the focus of the test is the exchange, transmission, and control management of data, as well as the logical dependencies between systems.
In the related art, the testing process mainly depends on the requirement specification and the interface requirements. Because interface parameters are complex and the test data must cover many scenarios while staying as close to real scenarios as possible, data preparation is extremely time-consuming. During online regression testing, historical data and historical traffic must be replayed, which is particularly difficult to handle; and when screening tens of thousands of candidate records for representative test data, testers can rely only on their own experience.
Disclosure of Invention
In view of this, embodiments of the present application provide an interface testing method, an interface testing device, and a storage medium, so as to solve the problem in the related art that interface test data can only be determined through manual simulation or by selecting a small amount of online data.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an interface testing method, where the method includes:
collecting production environment data;
performing cluster analysis on the production environment data to obtain a final result of the cluster analysis;
determining final test data from the production environment data according to a final result of the cluster analysis;
and testing the interface according to the final test data to obtain a final test result.
In a second aspect, an embodiment of the present application provides an interface testing apparatus, where the apparatus includes:
the acquisition unit is used for acquiring production environment data;
the analysis unit is used for carrying out cluster analysis on the production environment data to obtain a final result of the cluster analysis;
a first determining unit, configured to determine final test data from the production environment data according to a final result of the cluster analysis;
and the second determining unit is used for testing the interface according to the final test data to obtain a final test result.
In a third aspect, an embodiment of the present application provides an interface testing apparatus that includes at least a processor and a memory for storing executable instructions runnable on the processor, wherein:
the processor, when executing the executable instructions, performs the interface testing method provided in the above embodiments.
In a fourth aspect, the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when executed by a processor, the computer-executable instructions implement the interface testing method provided in the foregoing embodiment.
According to the interface testing method, device, and storage medium of the embodiments, the production environment data are screened by cluster analysis to determine the interface test data. This solves the problem in the related art that interface test data could only be determined through manual simulation or by selecting a small amount of online data, saves the time spent screening test data, and achieves the technical effect of improving interface testing efficiency.
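The four-step flow summarized above can be sketched end to end in Python. This is a minimal illustration, not the patent's implementation: the data source, the clustering step, and the interface check are all hypothetical stand-ins, and the real clustering would be the PAM plus hierarchical scheme described later.

```python
import random

def collect_production_data():
    # Hypothetical stand-in for collecting download records from the
    # production environment; real data would come from logs or a database.
    random.seed(0)
    return [{"file_size": random.randint(1, 500),
             "duration": random.randint(1, 60),
             "file_type": random.choice(["doc", "img", "zip"]),
             "success": random.choice([True, False])}
            for _ in range(100)]

def cluster(records):
    # Placeholder for the PAM + hierarchical clustering described below:
    # here records are simply bucketed by file type, purely for illustration.
    clusters = {}
    for r in records:
        clusters.setdefault(r["file_type"], []).append(r)
    return list(clusters.values())

def select_test_data(clusters):
    # Randomly pick one cluster; its records become the final test data.
    return random.choice(clusters)

def run_interface_test(test_data):
    # Placeholder standing in for calling the download interface and
    # comparing the result against the expected outcome.
    return all("file_size" in r for r in test_data)

records = collect_production_data()
final_test_data = select_test_data(cluster(records))
result = run_interface_test(final_test_data)
```

Swapping `cluster` for a real PAM implementation and `run_interface_test` for a real interface call would reproduce the claimed flow.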
Drawings
Fig. 1 is a first schematic flow chart illustrating an implementation of an interface testing method in the related art;
FIG. 2 is a schematic diagram illustrating a second implementation flow of an interface testing method in the related art;
FIG. 3 is a flow chart illustrating an implementation of determining interface test data in the related art;
fig. 4 is a first schematic flow chart illustrating an implementation process of the interface testing method according to the embodiment of the present application;
fig. 5 is a second schematic flow chart illustrating an implementation process of the interface testing method according to the embodiment of the present application;
fig. 6 is a schematic diagram of an implementation process of cluster analysis provided in the embodiment of the present application;
fig. 7 is a schematic structural diagram of an interface testing apparatus according to an embodiment of the present disclosure;
fig. 8 is a specific hardware structure of an interface testing apparatus according to an embodiment of the present disclosure.
Detailed Description
In the related art, the interface testing flow resembles the functional testing flow and is mostly carried out manually. As shown in fig. 1, it mainly includes the following steps:
step 101, determining interface requirements.
Here, testers manually familiarize themselves with the well-defined interface requirements and, on that basis, learn the application scenarios of the interface (for example, between which systems the interface implements data exchange, transmission, and control management), determining as many test-relevant details as possible.
Step 102, an interface test scheme is formulated according to interface requirements, and an interface test case is designed.
A corresponding interface test scheme is formulated manually according to the interface requirements. When formulating it, not only the function of the interface but also its performance and security must be considered; the scheme is then drawn up against the corresponding parameters, and interface test cases are designed according to the specific interface requirements and the test scheme.
Interface test cases form the basis for designing and executing the test process. An interface test case is a set of test inputs, execution conditions, and expected results compiled for a target interface, used to exercise a particular program path or verify whether a specific requirement is met. Test cases organize and summarize the activities of interface testing so that they can be managed in a systematic way; they are also one of the methods for concretely quantifying the test, and different types of interfaces may require different test cases.
And step 103, evaluating the interface test case.
Here, the interface test cases are also reviewed manually: the interface test scheme and test cases are sent to the relevant personnel participating in the review, and if the reviewers raise improvement suggestions, the test cases are revised accordingly until the review passes.
Step 104, determining interface test data.
Here, interface test data is either data simulated by testers according to preset data rules or a small selection of online-environment data. In the related art, testers need to simulate the interface test data according to preset data rules and also prepare an interface testing tool and automated test cases. The interface testing tool may be at least one of Postman, JMeter, SoapUI, and Python; the automated test cases are the interface test cases that passed review.
And 105, executing the interface test case on the interface to be tested.
After the interface testing tool is installed, the simulated interface test data is loaded and the interface test is executed according to the corresponding test cases, testing the interaction between systems and the processing of the interface test data to obtain a test result. If the test result reveals a program error (bug), the bug is submitted to the corresponding device.
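As a minimal sketch of this step, the following Python code loads prepared test data and executes test cases against an interface. `download_interface` is a hypothetical stand-in for the system under test, and its pass/fail rule is invented purely for illustration.

```python
def download_interface(file_type, file_size, packed):
    """Stub standing in for the real download interface under test."""
    # Hypothetical behaviour: downloads succeed unless the file is too large.
    success = file_size <= 1024
    duration = file_size // 10 + 1
    return {"success": success, "duration": duration}

def run_test_case(case):
    """Execute one interface test case; report pass/fail and any detected bug."""
    actual = download_interface(case["file_type"], case["file_size"], case["packed"])
    if actual["success"] != case["expected_success"]:
        return {"passed": False,
                "bug": f"expected success={case['expected_success']}, got {actual['success']}"}
    return {"passed": True, "bug": None}

# Loaded (simulated) interface test data with expected results.
cases = [
    {"file_type": "zip", "file_size": 100, "packed": True, "expected_success": True},
    {"file_type": "img", "file_size": 2048, "packed": False, "expected_success": False},
]
results = [run_test_case(c) for c in cases]
```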
And step 106, sending an interface test report.
An interface test report is generated from the test results of the preceding steps and submitted to the responsible staff, who fix the bugs according to the report. If bugs remain, they can be tested in the next round of interface testing; correspondingly, if an interface is changed, the interface test cases must be modified for the changed interface, and the corresponding interface tested again with the modified cases.
If interface testing is automated through the test cases, the continuous-integration system Jenkins is added to the automation process; if a bug caused by a code change is found, an interface test report is generated from the detected bug and submitted. Jenkins is an open-source software project, a continuous-integration tool developed in Java, used to monitor continuous and repetitive work; it aims to provide an open, easy-to-use software platform that makes continuous integration of software possible.
In the testing process, preparing the interface test data, calling the interface, and verifying the results are the core steps. As shown in fig. 2, the interface testing process mainly includes the following steps:
step 201, determining interface test data.
Fig. 3 is a flowchart illustrating an implementation process of determining interface test data in the related art, and as shown in fig. 3, determining interface test data mainly includes the following steps:
step 301, interface test data preparation.
Here, interface test data is either data simulated by testers according to preset data rules or a small selection of online-environment data. It mainly includes: interface parameter data, database (DB) data, and virtual (mock) data. Interface input parameters carry values of their own; during testing, the values they contain are passed to a function in the interface test case so that the function can process them. Mock data is virtual data: for objects that are hard to construct or obtain during testing, a virtual object (the mock object) is created to make testing convenient. The mock object substitutes for the real object during debugging.
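The mock-object idea described above is directly supported by Python's standard library. The sketch below, with a hypothetical database dependency, shows a Mock standing in for an object that is hard to construct in the test environment:

```python
from unittest.mock import Mock

# Hypothetical database dependency that is hard to construct in a test
# environment; a Mock object stands in for it during debugging.
db = Mock()
db.fetch_download_record.return_value = {"file_type": "zip", "success": True}

def check_download_logged(db, record_id):
    """Code under test: reads a record through the DB dependency."""
    record = db.fetch_download_record(record_id)
    return record["success"]

ok = check_download_logged(db, record_id=42)
# The mock also records how it was called, so the interaction can be verified.
db.fetch_download_record.assert_called_once_with(42)
```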
Step 302, interface calling.
Here, after the interface test data is prepared, the manually selected testing tool calls the interface under test in preparation for the interface test.
Step 303, the result is checked.
Here, the test result is compared with a preset expected result to determine whether the interface behaves normally. The data to be checked mainly includes interface output-parameter data and DB data. The interface output-parameter data is produced by the main function, i.e. the function included in the interface test case; output parameters carry no value of their own and obtain their values from the function (equivalent to the function's return values).
And step 304, deleting the data.
Here, once the current round of testing is finished, the interface test data can be deleted and new interface test data imported for the next round.
Step 202, an interface test tool is selected.
Here, testers select the interface testing tool that calls the interface according to experience; the tool may be at least one of Postman, JMeter, SoapUI, and Python.
Step 203, using the assertion in the interface test tool to determine the returned result.
Here, an assertion is a programming construct: a boolean expression that is expected to be true at a particular point in the program. Assertion checking can be enabled or disabled at any time; in general, assertions are enabled during testing and disabled at deployment.
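In Python, for instance, the returned result can be judged with plain assert statements; running the interpreter with -O disables assertion checking, matching the enable-at-test, disable-at-deployment practice described above. The result structure below is invented for illustration:

```python
def call_interface_stub():
    # Hypothetical returned result from the interface under test.
    return {"status": 200, "body": {"success": True}}

result = call_interface_stub()

# Assertions on the returned result; with CPython they can be disabled
# wholesale at deployment time by running the interpreter with -O.
assert result["status"] == 200, "unexpected HTTP status"
assert result["body"]["success"] is True, "download interface reported failure"
```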
In the related art, because interface parameters are complex and the test data must cover many scenarios while staying as close to real scenarios as possible, data preparation is extremely time-consuming; during online regression testing, historical data and historical traffic must be replayed, which is particularly difficult to handle; and when deciding how to select among tens of thousands of candidate test data and how to pick approximate data, testers can rely only on their own experience.
To address these problems in the related art, the embodiments of the present application provide an interface testing method combined with artificial intelligence, which tests interfaces quickly, automatically, and accurately, saves data-screening time, and improves the efficiency of the whole testing process. The method provides: (1) an overall scheme combining a clustering algorithm with interface testing; (2) a script based on a clustering algorithm for screening mass data, determining the corresponding interface test data from daily online data; (3) a script based on a hierarchical clustering algorithm for enriching scenarios and covering more historical data.
In the embodiments of the present application, interface test data can be screened from massive online data. The screening involves interface feature-data extraction, the Gower distance, the Partitioning Around Medoids (PAM) algorithm, and an agglomeration algorithm, which enriches the categories of interface test data; clustering N days of data again with a hierarchical clustering algorithm covers more historical scenarios and comes closer to the production environment. This solves the problem that data preparation could previously be done only by manual simulation or by selecting a small amount of online data, and screens representative, reliable test data from massive online data. Unlike earlier semi-automatic interface methods, the embodiments incorporate an automatic online-data screening method, achieving full automation of interface testing from data preparation through test completion.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
Fig. 4 is a first schematic flow chart of an implementation process of the interface testing method provided in the embodiment of the present application, and as shown in fig. 4, the method mainly includes the following steps:
step 401, collecting production environment data.
Here, the production environment refers to the system environment in actual use. Since tens of thousands of download records are produced in the production environment every day, these download records serve as the production environment data.
The production environment data includes continuous variables and discrete variables. The continuous variables include file size and download duration; the discrete variables include file type, whether the download succeeded, and whether it was a packaged download. File type, file size, and the packaged-download flag are input parameters of the download interface; the success flag and the download duration are its output parameters. In the embodiments of the present application, file type, file size, packaged-download flag, success flag, and download duration can be selected to build the data feature set.
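A minimal sketch of building this data feature set from a raw download record; field names are illustrative, not the patent's:

```python
# The two continuous and three discrete features named above,
# under hypothetical field names.
FEATURES = ["file_type", "file_size", "packed", "success", "duration"]

def to_feature_vector(record):
    """Project a raw production record onto the chosen feature set."""
    return {f: record[f] for f in FEATURES}

# A raw record may carry extra fields that the feature set drops.
raw = {"file_type": "zip", "file_size": 300, "packed": True,
       "success": True, "duration": 12, "client_ip": "10.0.0.1"}
vec = to_feature_vector(raw)
```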
Step 402, performing cluster analysis on the production environment data to obtain a final result of the cluster analysis.
It should be noted that, performing cluster analysis on the production environment data to obtain a final result of the cluster analysis includes: performing cluster analysis on the production environment data based on the first mathematical model to obtain a first result of the cluster analysis; and performing cluster analysis on the first result based on the second mathematical model to obtain a final result of the cluster analysis.
It should be noted that, performing cluster analysis on the production environment data based on the first mathematical model to obtain a first result of the cluster analysis includes: determining the production environment data as original test data; performing clustering analysis on the original test data based on a PAM algorithm to obtain a first result of the clustering analysis; correspondingly, performing cluster analysis on the first result based on the second mathematical model to obtain a final result of the cluster analysis, including: and performing clustering analysis on the first result based on a hierarchical clustering algorithm to obtain a final result of the clustering analysis.
It should be noted that a set number of clusters is formed based on the distances between the points of the original test data, where a cluster is a set of similar original test data; the number of clusters is determined according to the silhouette coefficient of each clustering; the best-performing clustering is selected based on that number; and the best-performing clustering is taken as the first result of the cluster analysis.
It should be noted that forming a set number of clusters based on the distances between the points of the original test data includes: determining the points of a set number of original test data as cluster center points; traversing all the original test data and computing the distance from each datum to every cluster center point; determining the cluster center point nearest to each datum; and classifying each datum under its nearest cluster center point, forming the set number of clusters, which constitute the result of this round of cluster analysis.
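The assignment step described above (classify each datum under its nearest cluster center) can be sketched as follows, using one-dimensional toy data and absolute difference as the distance:

```python
def assign_to_clusters(points, centers, dist):
    """Assign each point to its nearest cluster center, yielding len(centers) clusters."""
    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
        clusters[nearest].append(p)
    return clusters

# One-dimensional toy data with absolute difference as the distance metric.
points = [1, 2, 3, 10, 11, 12]
centers = [2, 11]
clusters = assign_to_clusters(points, centers, dist=lambda a, b: abs(a - b))
```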
And step 403, determining final test data from the production environment data according to the final result of the cluster analysis.
Here, at least one cluster may be randomly selected from the final result of the cluster analysis, that is, the formed set number of clusters, and the original test data included in the cluster may be used as the final test data of the download interface.
And step 404, testing the interface according to the final test data to obtain a final test result.
It should be noted that, in any of the above embodiments, the method for testing the interface according to the final test data to obtain the final test result includes: comparing an initial test result of testing the interface according to the final test data with a set test result to obtain a comparison result; and determining a final test result according to the comparison result.
It should be noted that, in any of the above embodiments, the method further includes: establishing a data feature set based on the original test data; determining a measurement method for calculating the distance of each point of the original test data according to the type of the data feature set; based on the determined distance metric, the distance between the points at which the respective raw test data are located is determined.
Here, a data feature set can be formed from the original test data. The first step of cluster analysis is to determine a distance metric between the original test data; for example, if the data feature set contains multiple types of data and is therefore a mixed-type data set, the Gower distance can be selected as the metric according to this characteristic.
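A minimal sketch of the Gower distance for such a mixed-type record, assuming range-normalized absolute differences for the continuous fields and a simple 0/1 mismatch for the discrete ones; field names and ranges are illustrative:

```python
def gower_distance(a, b, ranges, categorical):
    """Gower distance over a mixed record: range-normalized absolute
    difference for continuous fields, 0/1 mismatch for categorical
    fields, averaged over all fields (result lies in [0, 1])."""
    total = 0.0
    fields = list(a)
    for f in fields:
        if f in categorical:
            total += 0.0 if a[f] == b[f] else 1.0
        else:
            total += abs(a[f] - b[f]) / ranges[f]
    return total / len(fields)

r1 = {"file_size": 100, "duration": 10, "file_type": "zip", "success": True}
r2 = {"file_size": 300, "duration": 30, "file_type": "zip", "success": False}
ranges = {"file_size": 400, "duration": 40}  # observed max - min per continuous field
d = gower_distance(r1, r2, ranges, categorical={"file_type", "success"})
```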
Fig. 5 is a schematic view of an implementation flow of the interface testing method according to the embodiment of the present application, and as shown in fig. 5, the testing flow mainly includes the following steps:
step 501, collecting production environment data.
Here, the collected production environment data is determined as the original test data, and the production environment refers to the system environment actually used. In the embodiment of the application, historical production environment data needs to be collected, and the collected historical production environment data is applied to a test environment and is used as original test data.
The production environment data includes continuous variables and discrete variables. The continuous variables include file size and download duration; the discrete variables include file type, whether the download succeeded, and whether it was a packaged download. File type, file size, and the packaged-download flag are input parameters of the download interface; the success flag and the download duration are its output parameters. In the embodiments of the present application, file type, file size, packaged-download flag, success flag, and download duration can be selected to build the data feature set.
Step 502, performing cluster analysis on the collected production environment data to obtain a cluster analysis result.
Fig. 6 is a schematic view of an implementation process of cluster analysis provided in the embodiment of the present application, and as shown in fig. 6, the cluster analysis mainly includes the following steps:
step 601, determining a measurement method for calculating the distance between the points of each original test data according to the type of the data feature set.
Here, the collected production environment data is taken as the original test data, from which a data feature set can be formed. The first step of cluster analysis is to determine a distance metric between the original test data; for example, if the data feature set contains multiple types of data and is therefore a mixed-type data set, the Gower distance can be selected as the metric according to this characteristic.
For the continuous variables in this feature set, the Manhattan distance is applied after normalization, and the Gower distance is then calculated from the normalized result. For the discrete variables, a variable containing k categories is first normalized to [0, 1], i.e. converted into k variables ranging over 0 to 1, and then processed further using the Dice coefficient. In the embodiments of the present application, the Gower distance can be computed with the daisy function: when the original variables are of mixed type, or the metric is set to "gower", the Gower distances between the original test data in the data set are calculated by the daisy() method. The call signature for computing the Gower distance is as follows:
daisy(x, metric = c("euclidean", "manhattan", "gower"), stand = FALSE, type = list(), weights = rep.int(1, p), ...);
where x is a numeric matrix or data frame: numeric columns are treated as interval-scaled variables, factor columns as nominal attributes, and ordered factors as ordinal variables; other variable types must be specified via the type argument.
metric: a character string; valid values are "euclidean" (the default), "manhattan", and "gower".
stand: a logical value indicating whether the data should be standardized column-wise before dissimilarities are computed.
type: a list specifying the types of the variables in x; valid list entries are "ordratio" (ratio-scaled variables to be treated as ordinal), "logratio" (ratio-scaled variables to be log-transformed), "asymm" (asymmetric binary attributes), and "symm" (symmetric binary attributes and nominal attributes).
weights: a numeric vector of length p = ncol(x); for mixed-type variables (or metric = "gower"), it specifies the weight of each variable, with a default weight of 1.
The daisy function handles nominal, ordinal, and binary attribute data through the Gower dissimilarity coefficient. If the variables of x are of nominal, ordinal, or binary type, the function ignores the metric and stand arguments and computes the distance matrix using the Gower coefficient. For ratio-scaled data, the dissimilarity matrix can be computed via the corresponding type setting, and the data objects must be normalized to [0, 1] before the calculation.
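The Dice coefficient mentioned above for binary variables can be computed as follows; this is a generic sketch of the coefficient itself, not of the daisy internals:

```python
def dice_similarity(x, y):
    """Dice coefficient between two binary vectors: 2|X n Y| / (|X| + |Y|),
    where |X| and |Y| count the 1-entries of each vector and |X n Y|
    counts the positions where both vectors are 1."""
    both = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    return 2 * both / (sum(x) + sum(y))

# Two binary indicator vectors, e.g. one-hot encodings of a k-category variable.
s = dice_similarity([1, 1, 0, 1], [1, 0, 0, 1])
```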
Step 602, performing cluster analysis on the original test data based on the PAM algorithm to form a plurality of similar sets of original test data, i.e. a plurality of clusters.
In the embodiment of the application, after the distance matrix of each original test data is calculated, a PAM algorithm is selected to construct a model. The main implementation steps for clustering the original test data by using the PAM algorithm are as follows:
a. randomly selecting points where k original test data are located, and setting the points as cluster center points;
b. traversing all the original test data, calculating the distance between each original test data and each cluster center point, and classifying each original test data into the cluster center point closest to the original test data to form k clusters;
c. for each cluster, finding the point with the minimum total distance to the points where the other original test data in the cluster are located, and setting that point as the new cluster center point;
d. repeating steps b and c until a preset condition is met, such as the set number of iterations being reached or all the original test data having been traversed.
Here, a plurality of clusters may be formed by randomly selecting k original test data and setting them as cluster center points, traversing all the original test data, and assigning each original test datum to its nearest cluster center point. A cluster is a set of similar original test data: the original test data within the same cluster are similar to each other, while the original test data of different clusters differ from each other, k being a positive integer.
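Steps a-d above can be sketched as a simplified k-medoids loop in Python. This is a minimal sketch of the PAM idea under assumed inputs (2-D points and a Manhattan metric), not the embodiment's production code; real PAM implementations also evaluate swap costs more exhaustively.

```python
import random

def pam(points, k, dist, max_iter=10, seed=0):
    """Simplified PAM (k-medoids) sketch following steps a-d above."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)  # step a: random centers
    clusters = {}
    for _ in range(max_iter):
        # step b: assign each point to its nearest cluster center point
        clusters = {m: [] for m in medoids}
        for i in range(len(points)):
            nearest = min(medoids, key=lambda m: dist(points[i], points[m]))
            clusters[nearest].append(i)
        # step c: in each cluster, pick the point with minimum total
        # distance to the other members as the new center point
        new_medoids = [
            min(members,
                key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):  # step d: stop when stable
            break
        medoids = new_medoids
    return medoids, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
manhattan = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
medoids, clusters = pam(pts, 2, manhattan)
print(sorted(len(m) for m in clusters.values()))  # → [3, 3]
```

With two well-separated groups of three points each, the loop converges to one medoid per group regardless of which points are drawn in step a.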
Step 603, determining the number of clusters according to the contour coefficient of each cluster, so as to select the cluster with the best performance.
Here, the optimal number of clusters can be selected using the contour (silhouette) coefficient, an internal index for measuring how well the clusters are separated; its value range is [-1, 1], and the larger the value, the better the clustering. In the embodiment of the application, the number of clusters can be determined by comparing the contour coefficients obtained under different cluster numbers, so as to select the clustering with the best performance.
Taking the case where k clusters have been formed based on step 602 above, the contour coefficient can be calculated separately for each vector in a cluster. For a point i, the contour coefficient s(i) of the i vector is calculated by formula (1):
s(i) = (b(i) - a(i)) / max(a(i), b(i))        (1)
in formula (1), a(i) is the average dissimilarity of the i vector to the other points in the same cluster, namely the average distance from the i vector to all the other points of the cluster to which it belongs; and b(i) is the minimum average dissimilarity of the i vector to the other clusters, namely the minimum, taken over all clusters other than its own, of the average distance from the i vector to all the points of that cluster.
Here, the contour coefficients of all the points in a cluster are averaged to obtain the contour coefficient of that cluster, so that the number of clusters can be determined according to the contour coefficient of each cluster; for example, when the contour coefficient of a certain cluster is determined to be smaller than the set contour coefficient threshold, the cluster is removed; when the contour coefficient of a certain cluster is determined to be larger than or equal to the set contour coefficient threshold, the cluster is retained, and the retained clusters are further processed through the following steps.
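Formula (1) and the a(i)/b(i) definitions above can be sketched directly in Python. This is an illustrative sketch under assumed inputs (2-D points, Manhattan distance, a singleton cluster scored as 0 by convention), not the embodiment's implementation.

```python
from collections import defaultdict

def silhouette(points, labels, dist):
    """Per-point contour (silhouette) coefficients per formula (1)."""
    groups = defaultdict(list)
    for idx, lab in enumerate(labels):
        groups[lab].append(idx)
    scores = []
    for i, lab in enumerate(labels):
        own = [j for j in groups[lab] if j != i]
        if not own:            # singleton cluster: define s(i) = 0
            scores.append(0.0)
            continue
        # a(i): average distance to the other points of i's own cluster
        a = sum(dist(points[i], points[j]) for j in own) / len(own)
        # b(i): minimum average distance to the points of any other cluster
        b = min(
            sum(dist(points[i], points[j]) for j in members) / len(members)
            for lab2, members in groups.items() if lab2 != lab
        )
        scores.append((b - a) / max(a, b))
    return scores

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = [0, 0, 1, 1]
manhattan = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
s = silhouette(pts, labels, manhattan)
print(all(v > 0.9 for v in s))  # tight, well-separated clusters score near 1
```

Averaging these per-point scores within a cluster yields the per-cluster contour coefficient used for the threshold comparison described above.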
And step 604, performing cluster analysis on the selected clusters according to a hierarchical clustering algorithm to obtain a final result of the cluster analysis.
Since the production environment data generated each day have a certain degree of similarity as well as partial differences, in order to screen out the optimal interface test data more accurately, all clusters within a set number of days may be aggregated, for example all clusters within the last 3 or 7 days, and whether the merged result is valid is then determined. Assuming there are N clusters in total over the last 7 days: if the number of clusters after merging is less than N/7, the flow ends; if the number of clusters after merging is greater than or equal to N/7, the merging result is valid. In the embodiment of the application, the samples within the set number of days can be clustered by the agglomeration method, which mainly comprises the following steps:
a. calculating the distance between every two clusters;
here, the two closest clusters are determined by calculating the distance between every two clusters; the smaller the distance between two clusters, the greater their similarity, wherein the distance between two clusters can be calculated from the distance between their cluster center points.
b. merging the two closest clusters to obtain a new cluster;
c. replacing the two merged clusters with the new cluster in the set of clusters, and then re-executing step a;
d. until the number of clusters reaches a set value.
The number of clusters after merging is the recommended amount of test data per day. Compared with the amount of data accumulated every day, the recommended daily amount of test data obtained by this cluster analysis method is greatly reduced, which shortens the online data screening time and improves the reliability of the data screening. The above process of handling the production environment data by cluster analysis is similar to performing equivalence class partitioning on the data.
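The agglomeration steps a-d above can be sketched as follows. This is a minimal Python sketch under assumed inputs (clusters of 2-D points, cluster distance taken as Manhattan distance between cluster centers), not the embodiment's implementation; the validity check against N/7 would be applied to the returned cluster count.

```python
def agglomerate(clusters, target, dist):
    """Agglomerative merging per steps a-d: repeatedly merge the two
    closest clusters (by center-point distance) until only `target`
    clusters remain."""
    def center(c):
        # center point of a cluster: coordinate-wise mean of its points
        return tuple(sum(v) / len(c) for v in zip(*c))

    clusters = [list(c) for c in clusters]
    while len(clusters) > target:
        # step a: distance between every two clusters (via center points)
        pairs = [(dist(center(clusters[i]), center(clusters[j])), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)                       # step b: closest pair
        merged = clusters[i] + clusters[j]
        clusters = [c for n, c in enumerate(clusters) if n not in (i, j)]
        clusters.append(merged)                    # step c: add new cluster
    return clusters                                # step d: set value reached

day_clusters = [[(0, 0)], [(0, 1)], [(9, 9)], [(10, 10)], [(10, 11)]]
manhattan = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
result = agglomerate(day_clusters, 2, manhattan)
print(sorted(len(c) for c in result))  # → [2, 3]
```

The two near-origin singletons merge with each other and the three clusters around (10, 10) merge together, leaving the set number of two clusters.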
Step 503, determining the final test data from the production environment data according to the result of the cluster analysis.
Here, at least one cluster may be randomly selected from the merged plurality of clusters, and the original test data included in the cluster may be used as the final test data of the download interface.
Step 504, comparing the initial test result of the interface test according to the final test data with the set test result to obtain a comparison result, and determining the final test result according to the comparison result.
The daily online data interface test can thus be completed according to the comparison result, realizing fully automatic interface testing. The set test result is determined from historical test results and may be a set threshold range, used to judge whether the initial test result is within the normal range and hence whether the interface is normal; for example, when the initial test result is within the set threshold range, the interface is characterized as normal; if the initial test result is not within the set threshold range, the interface is characterized as having a problem.
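The comparison described above reduces to a range check against thresholds derived from historical results, which can be sketched as follows; the function name and the 50-500 ms response-time range are hypothetical examples, not values from the embodiment.

```python
def judge_interface(initial_result, low, high):
    """Compare an initial test result with the set threshold range
    (derived from historical test results) and return the verdict."""
    if low <= initial_result <= high:
        return "interface normal"
    return "interface problem"

# Hypothetical response-time check: historical normal range 50-500 ms
print(judge_interface(120, 50, 500))  # → interface normal
print(judge_interface(900, 50, 500))  # → interface problem
```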
In the embodiment of the application, cluster analysis of interface test data is realized based on the Gower distance and the PAM algorithm, and after the features of the input and output parameters of an interface are extracted, interface test data screening is realized in combination with Artificial Intelligence (AI). Different from the way in which testers test by manually partitioning or randomly selecting data, the brand-new automatic and reliable screening method provided by the embodiment of the application calculates the distances between samples based on the Gower distance algorithm and performs cluster analysis on the samples based on the PAM algorithm to obtain the daily clustering result of the data.
Based on the same inventive concept of the above embodiments, an interface testing apparatus is provided in the embodiments of the present application, and fig. 7 is a schematic structural diagram of the interface testing apparatus provided in the embodiments of the present application, as shown in fig. 7, the interface testing apparatus 700 includes:
an acquisition unit 701 for acquiring production environment data;
an analyzing unit 702, configured to perform cluster analysis on the production environment data to obtain a final result of the cluster analysis;
a first determining unit 703, configured to determine final test data from the production environment data according to a final result of the cluster analysis;
a second determining unit 704, configured to test the interface according to the final test data to obtain a final test result.
The analysis unit 702 includes:
the first analysis module is used for carrying out cluster analysis on the production environment data based on a first mathematical model to obtain a first result of the cluster analysis;
and the second analysis module is used for carrying out cluster analysis on the first result based on a second mathematical model to obtain a final result of the cluster analysis.
It should be noted that, the first analysis module includes:
a determining submodule for determining the production environment data as original test data;
the first analysis submodule is used for carrying out clustering analysis on the original test data based on a PAM algorithm to obtain a first result of the clustering analysis;
correspondingly, the second analysis module comprises:
and the second analysis submodule is used for carrying out clustering analysis on the first result based on a hierarchical clustering algorithm so as to obtain a final result of the clustering analysis.
It should be noted that the first analysis submodule is specifically configured to:
forming a set number of clusters based on the distance of the points of the original test data, wherein the clusters are a set of similar original test data;
determining the number of clusters according to the contour coefficient of each cluster;
selecting a cluster with the best performance based on the cluster number;
and determining the cluster with the best performance as a first result of cluster analysis.
It should be noted that, the first analysis sub-module is further specifically configured to:
determining the points where the set number of original test data are located as cluster center points;
traversing all the original test data, and calculating the distance between the point where each original test data is located and the center point of each cluster;
determining a cluster central point which is closest to the point where each original test data is located;
and classifying each original test data into the corresponding nearest cluster center point to form a cluster with a set number.
It should be noted that the apparatus further includes:
the construction unit is used for establishing a data characteristic set based on the original test data;
a third determining unit, configured to determine, according to the type of the data feature set, a metric method for calculating a distance between points where the original test data are located;
and the fourth determining unit is used for determining the distance between the points of each original test data based on the distance measuring method.
Note that the second determining unit 704 includes:
the comparison module is used for comparing an initial test result of the interface test according to the final test data with a set test result to obtain a comparison result;
and the determining module is used for determining a final test result according to the comparison result.
The components in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
The integrated unit, if implemented in the form of a software functional module and not sold or used as a stand-alone product, may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or partially contributed to by the prior art, and the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Accordingly, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the steps described in the above embodiments.
Referring to fig. 8, a specific hardware structure of an interface testing apparatus 800 provided in an embodiment of the present application is shown, including: a network interface 801, a memory 802, and a processor 803; the various components are coupled together by a bus system 804. It is understood that the bus system 804 is used to enable communications among the components. The bus system 804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 804 in FIG. 8. Wherein, the network interface 801 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
a memory 802 for storing a computer program capable of running on the processor 803;
a processor 803 for executing, when running the computer program, the following:
collecting production environment data;
performing cluster analysis on the production environment data to obtain a final result of the cluster analysis;
determining final test data from the production environment data according to a final result of the cluster analysis;
and testing the interface according to the final test data to obtain a final test result.
The processor 803 is further configured to, when running the computer program, perform:
performing cluster analysis on the production environment data based on a first mathematical model to obtain a first result of the cluster analysis;
and performing cluster analysis on the first result based on a second mathematical model to obtain a final result of the cluster analysis.
The processor 803 is further configured to, when running the computer program, perform:
determining the production environment data as original test data;
performing clustering analysis on the original test data based on a PAM algorithm to obtain a first result of the clustering analysis;
and performing clustering analysis on the first result based on a hierarchical clustering algorithm to obtain a final result of the clustering analysis.
The processor 803 is further configured to, when running the computer program, perform:
forming a set number of clusters based on the distance of the points of the original test data, wherein the clusters are a set of similar original test data;
determining the number of clusters according to the contour coefficient of each cluster;
selecting a cluster with the best performance based on the cluster number;
and determining the cluster with the best performance as a first result of cluster analysis.
The processor 803 is further configured to, when running the computer program, perform:
determining the points where the set number of original test data are located as cluster center points;
traversing all the original test data, and calculating the distance between the point where each original test data is located and the center point of each cluster;
determining a cluster central point which is closest to the point where each original test data is located;
and classifying each original test data into the corresponding nearest cluster center point to form a cluster with a set number.
The processor 803 is further configured to, when running the computer program, perform:
establishing a data feature set based on the original test data;
determining a measurement method for calculating the distance of each point of the original test data according to the type of the data feature set;
and determining the distance between the points of the original test data based on the distance measurement method.
The processor 803 is further configured to, when running the computer program, perform:
comparing an initial test result of the interface test according to the final test data with a set test result to obtain a comparison result;
and determining a final test result according to the comparison result.
It will be appreciated that the memory 802 in the subject embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of illustration and not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous SDRAM (ESDRAM), Sync Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 802 of the methodologies described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
And the processor 803 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 803. The Processor 803 may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules, i.e. the computer executable instructions, which are executed by the processor to implement the steps of the above-described interface testing method, may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, an EEPROM, a register, or an optical disc. The storage medium is located in the memory 802, and the processor 803 reads the information in the memory 802 and, in combination with its hardware, performs the steps of the above-described method.
The description of the embodiments of the apparatus of the present application is similar to the description of the embodiments of the method described above, and has similar advantageous effects to the embodiments of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Of course, the apparatus in the embodiment of the present application may have other similar protocol interaction implementation cases, and those skilled in the art can make various corresponding changes and modifications according to the embodiment of the present application without departing from the spirit and scope of the present application, but these corresponding changes and modifications should fall within the scope of the claims appended to the method of the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the modules is only one logical functional division, and there may be other division ways in actual implementation, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or other.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules; they can be located in one place or distributed over a plurality of network units; some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional modules in the embodiments of the present application may be integrated into one processing module, or each module may be separately used as one module, or two or more modules may be integrated into one module; the integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated module described above in the present application may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a server to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An interface testing method, the method comprising:
collecting production environment data;
performing cluster analysis on the production environment data to obtain a final result of the cluster analysis;
determining final test data from the production environment data according to a final result of the cluster analysis;
and testing the interface according to the final test data to obtain a final test result.
2. The method of claim 1, wherein the performing cluster analysis on the production environment data to obtain a final result of cluster analysis comprises:
performing cluster analysis on the production environment data based on a first mathematical model to obtain a first result of the cluster analysis;
and performing cluster analysis on the first result based on a second mathematical model to obtain a final result of the cluster analysis.
3. The method of claim 2,
the clustering analysis is performed on the production environment data based on the first mathematical model to obtain a first result of the clustering analysis, and the method comprises the following steps:
determining the production environment data as original test data;
performing cluster analysis on the original test data based on a partitioning around a central point (PAM) algorithm to obtain a first result of the cluster analysis;
correspondingly, the performing cluster analysis on the first result based on the second mathematical model to obtain a final result of the cluster analysis includes:
and performing clustering analysis on the first result based on a hierarchical clustering algorithm to obtain a final result of the clustering analysis.
4. The method of claim 3, wherein said performing a clustering analysis on said raw test data based on a PAM algorithm to obtain a first result of the clustering analysis comprises:
forming a set number of clusters based on the distance of the points of the original test data, wherein the clusters are a set of similar original test data;
determining the number of clusters according to the contour coefficient of each cluster;
selecting a cluster with the best performance based on the cluster number;
and determining the cluster with the best performance as a first result of cluster analysis.
5. The method of claim 4, wherein forming a set number of clusters based on the distance between the points of each original test data comprises:
determining the points where the set number of original test data are located as cluster center points;
traversing all the original test data, and calculating the distance between the point where each original test data is located and the center point of each cluster;
determining a cluster central point which is closest to the point where each original test data is located;
and classifying each original test data into the corresponding nearest cluster center point to form a cluster with a set number.
6. The method according to any one of claims 1 to 5, further comprising:
establishing a data feature set based on the original test data;
determining a measurement method for calculating the distance of each point of the original test data according to the type of the data feature set;
and determining the distance between the points of the original test data based on the distance measurement method.
7. The method according to any one of claims 1 to 5, wherein the testing the interface according to the final test data to obtain a final test result comprises:
comparing an initial test result of the interface test according to the final test data with a set test result to obtain a comparison result;
and determining a final test result according to the comparison result.
8. An interface testing apparatus, the apparatus comprising:
the acquisition unit is used for acquiring production environment data;
the analysis unit is used for carrying out cluster analysis on the production environment data to obtain a final result of the cluster analysis;
a first determining unit, configured to determine final test data from the production environment data according to a final result of the cluster analysis;
and the second determining unit is used for testing the interface according to the final test data to obtain a final test result.
9. An interface testing apparatus, characterized in that the interface testing apparatus comprises at least: a processor and a memory for storing executable instructions operable on the processor, wherein:
the processor is configured to execute the executable instructions and, when the executable instructions are executed, to perform the interface testing method provided in any one of claims 1 to 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the interface testing method provided in any one of claims 1 to 7.
CN201910886571.2A 2019-09-19 2019-09-19 Interface testing method, device and storage medium Active CN112527573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910886571.2A CN112527573B (en) 2019-09-19 2019-09-19 Interface testing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112527573A true CN112527573A (en) 2021-03-19
CN112527573B CN112527573B (en) 2023-04-07


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114860575A (en) * 2022-03-31 2022-08-05 中国电信股份有限公司 Test data generation method and device, storage medium and electronic equipment
CN114860575B (en) * 2022-03-31 2023-10-03 中国电信股份有限公司 Test data generation method and device, storage medium and electronic equipment
CN115277165A (en) * 2022-07-22 2022-11-01 江苏智能网联汽车创新中心有限公司 Vehicle network risk determination method, device, equipment and storage medium
CN115277165B (en) * 2022-07-22 2023-11-07 江苏智能网联汽车创新中心有限公司 Vehicle network risk determination method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063374A (en) * 2011-01-07 2011-05-18 南京大学 Method for selecting regression test case for clustering with semi-supervised information
US20160004765A1 (en) * 2014-07-07 2016-01-07 Edward-Robert Tyercha Predictive Cluster Analytics Optimization
CN109062782A (en) * 2018-06-27 2018-12-21 阿里巴巴集团控股有限公司 A kind of selection method of regression test case, device and equipment





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant