CN117112438A - Performance test data construction method and device - Google Patents

Performance test data construction method and device

Info

Publication number
CN117112438A
Authority
CN
China
Prior art keywords
vector
performance test
training
coding
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311164930.6A
Other languages
Chinese (zh)
Inventor
钱磊
时晖
柯桂强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xince Technology Co ltd
Original Assignee
Hangzhou Xince Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xince Technology Co ltd filed Critical Hangzhou Xince Technology Co ltd
Priority to CN202311164930.6A priority Critical patent/CN117112438A/en
Publication of CN117112438A publication Critical patent/CN117112438A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a performance test data construction method and apparatus that generate performance test data meeting the test requirements by acquiring key information from the performance test requirements and converting, combining and expanding original data based on that key information.

Description

Performance test data construction method and device
Technical Field
The present application relates to the field of software testing, and more particularly, to a performance test data construction method and apparatus.
Background
With the advent and rapid growth of the internet, and in particular the large-scale use of the mobile internet and Internet of Things devices, data no longer originate only from human-machine sessions; a large amount of data is generated automatically by devices, servers, applications and the like, and this machine-generated data is growing at a geometric rate. For software testing, data quality is an important dimension of computer software system testing, and efficiently and correctly verifying the terabytes or more of data processed by a computer software system is a great challenge.
In the software testing process, the more closely the test data fed into the system match the data characteristics of the real scenario, the more accurate the test results. However, large amounts of real data are not easily available because of privacy protection and similar constraints, so a test data construction method is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a performance test data construction method and apparatus that generate performance test data meeting the test requirements by acquiring key information from the performance test requirements and converting, combining and expanding original data based on that key information.
According to an aspect of the present application, there is provided a performance test data construction method, comprising:
acquiring performance test requirements;
performing semantic coding on the performance test requirements to obtain performance test requirement semantic understanding vectors;
based on the performance test requirement semantic understanding vector, generating metadata;
extracting raw data from a data source; and
converting, combining and expanding the original data according to the generated metadata to obtain performance test data.
According to another aspect of the present application, there is provided a performance test data construction apparatus comprising:
the requirement acquisition module is used for acquiring performance test requirements;
the semantic coding module is used for carrying out semantic coding on the performance test requirements to obtain semantic understanding vectors of the performance test requirements;
the metadata generation module is used for generating metadata based on the performance test requirement semantic understanding vector;
the original data extraction module is used for extracting original data from a data source; and
the performance test data generation module is used for converting, combining and expanding the original data according to the generated metadata to obtain performance test data.
Compared with the prior art, the performance test data construction method and apparatus provided by the application acquire key information from the performance test requirements and convert, combine and expand the original data based on that key information, thereby generating performance test data that meet the test requirements.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the detailed description of embodiments of the present application given with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, and serve to illustrate the application together with its embodiments; they do not limit the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flow chart of a method of constructing performance test data according to an embodiment of the present application;
FIG. 2 is a system architecture diagram of a method of constructing performance test data according to an embodiment of the present application;
FIG. 3 is a flow chart of a training phase of a performance test data construction method according to an embodiment of the present application;
FIG. 4 is a flowchart of sub-step S2 of a performance test data construction method according to an embodiment of the present application;
FIG. 5 is a block diagram of a performance test data construction apparatus according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and in the claims, the terms "a," "an," and/or "the" do not denote the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the expressly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
In the software testing process, the more closely the test data fed into the system match the data characteristics of the real scenario, the more accurate the test results. However, large amounts of real data are not easily available because of privacy protection and similar constraints, so a test data construction method is desired.
In the technical solution of the present application, a performance test data construction method is provided. FIG. 1 is a flow chart of the performance test data construction method according to an embodiment of the present application. FIG. 2 is a system architecture diagram of the performance test data construction method according to an embodiment of the present application. As shown in FIG. 1 and FIG. 2, the performance test data construction method according to an embodiment of the present application includes the following steps: S1, acquiring performance test requirements; S2, performing semantic coding on the performance test requirements to obtain a performance test requirement semantic understanding vector; S3, generating metadata based on the performance test requirement semantic understanding vector; S4, extracting original data from a data source; and S5, converting, combining and expanding the original data according to the generated metadata to obtain performance test data.
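Purely as an illustrative sketch, and not as an implementation disclosed by the application, the following Python skeleton shows how steps S1 to S5 might be wired together; every function below is a hypothetical stub standing in for the components detailed in the remainder of the description.

```python
from typing import Any, Dict, List

def acquire_requirements(requirement_text: str) -> Dict[str, Any]:
    # S1: in practice, parse scale, structure, content, targets, scenarios,
    # indexes and load out of the requirement document.
    return {"raw_text": requirement_text}

def semantic_encode(requirements: Dict[str, Any]) -> List[float]:
    # S2: stand-in for the converter (Transformer) based semantic association encoder.
    return [float(len(requirements["raw_text"]))]

def generate_metadata(understanding_vector: List[float]) -> Dict[str, Any]:
    # S3: stand-in for the AIGC-based metadata generator.
    return {"record_count": int(understanding_vector[0])}

def extract_raw_data(data_source: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # S4: here the "data source" is simply an in-memory list of records.
    return list(data_source)

def convert_combine_expand(raw: List[Dict[str, Any]],
                           metadata: Dict[str, Any]) -> List[Dict[str, Any]]:
    # S5: trivially replicate records until the metadata-specified count is reached.
    out: List[Dict[str, Any]] = []
    while raw and len(out) < metadata["record_count"]:
        out.append(dict(raw[len(out) % len(raw)]))
    return out

def construct_performance_test_data(requirement_text: str,
                                    data_source: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    requirements = acquire_requirements(requirement_text)   # S1
    vector = semantic_encode(requirements)                  # S2
    metadata = generate_metadata(vector)                    # S3
    raw = extract_raw_data(data_source)                     # S4
    return convert_combine_expand(raw, metadata)            # S5

print(len(construct_performance_test_data("500 users", [{"order_id": 1}])))  # 9
```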
In particular, in step S1, the performance test requirements are acquired. In the technical solution of the present application, the performance test requirements include the scale, structure and content of the test data, as well as the targets, scenarios, indexes and loads of the performance test. It should be appreciated that these items are acquired to ensure that the generated performance test data accurately simulate the real scenario and meet the requirements and goals of the test. More specifically, the scale of the test data determines the coverage of the test, i.e. the quantity and diversity of the test data, while the structure and content describe how the data are organised and what information they contain. Acquiring the scale, structure and content of the test data therefore ensures that the generated test data accurately reflect the data characteristics of the real scenario. The targets, scenarios, indexes and loads of the performance test provide an important reference for generating test data that better simulate real scenarios, so that the generated performance test data are representative and practical and satisfy the requirements and goals of the test. In other words, based on these requirements, information such as the volume, type, distribution and characteristics of the performance test data can be determined.
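For illustration only, a performance test requirement covering the seven aspects above could be captured in a plain data structure before encoding; the field names and example values below are assumptions rather than terms defined by the application.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PerformanceTestRequirement:
    # Scale, structure and content of the test data.
    scale: Dict[str, int]      # e.g. {"concurrent_users": 500, "records": 1_000_000}
    structure: List[str]       # e.g. ["order_service -> payment_service"]
    content: List[str]         # e.g. ["order placement", "refund flow"]
    # Targets, scenarios, indexes and loads of the performance test.
    target: Dict[str, float]   # e.g. {"p95_response_ms": 200.0}
    scenario: str              # e.g. "peak-hour shopping"
    indexes: List[str]         # e.g. ["throughput", "error_rate"]
    load: Dict[str, str]       # e.g. {"pattern": "ramp-up", "type": "stress"}

requirement = PerformanceTestRequirement(
    scale={"concurrent_users": 500, "records": 1_000_000},
    structure=["order_service -> payment_service"],
    content=["order placement", "refund flow"],
    target={"p95_response_ms": 200.0},
    scenario="peak-hour shopping",
    indexes=["throughput", "error_rate"],
    load={"pattern": "ramp-up", "type": "stress"},
)
```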
In particular, in step S2, semantic coding is performed on the performance test requirements to obtain the performance test requirement semantic understanding vector. In one specific example of the present application, as shown in FIG. 4, step S2 includes: S21, performing data structuring processing on the performance test requirements to obtain a scale coding vector, a structure coding vector, a content coding vector, a target coding vector, a scene coding vector, an index coding vector and a load coding vector; and S22, extracting semantic association features among the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector to obtain the performance test requirement semantic understanding vector.
Specifically, in S21, data structuring processing is performed on the performance test requirements to obtain the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector. It should be appreciated that encoding the scale information in the performance test requirements (for example, the number of concurrent users or the data volume) helps testers determine the size of the test; encoding it in vector form makes scale adjustment and comparison more convenient and yields the scale coding vector. Encoding the system structure information helps testers understand the relationships among components and modules and the data flow of the system; the resulting structure coding vector helps testers design the test plan and determine the focus and scope of the test. Encoding the business scenario or functional information helps testers understand the functional characteristics and business processes of the system; with the content coding vector, testers can design representative test cases covering different functions and business scenarios so as to evaluate system performance under different conditions. Encoding the test targets helps testers determine the objectives and expected performance indexes; the target coding vector helps testers formulate the evaluation criteria of the performance test and judge whether the system meets the performance requirements. Encoding the test scene information helps testers determine the test environment and conditions; with the scene coding vector, testers can simulate the real usage environment to evaluate system performance in different scenes. Encoding the performance indexes helps testers determine which indexes to focus on; the index coding vector helps testers design the measurement method and data collection approach of the performance test so as to quantitatively evaluate system performance. Encoding the load information helps testers determine the load patterns and load types to apply during testing, such as normal load, peak load or stress testing; with the load coding vector, testers can simulate real load conditions to evaluate the performance and stability of the system under different loads.
Notably, data structuring processing refers to converting unstructured or semi-structured data into a structured form for better storage, processing and analysis. It typically covers the following aspects. Data extraction: useful information is extracted from unstructured or semi-structured data sources, which may involve text parsing, image processing, audio transcription and similar techniques to convert the data into a processable form. Data cleaning: the data are cleaned and pre-processed to remove noise, correct errors, fill missing values and so on, including deduplication, format conversion and missing-value handling, to ensure consistency and accuracy. Data conversion: unstructured or semi-structured data are converted into structured form, such as converting text into tables or XML into a relational database, which generally involves data model design, data normalisation and data mapping. Data integration: structured data from different data sources are integrated and consolidated to create a unified data set, which may involve data merging, data correlation and data concatenation to eliminate duplication and redundancy and provide a consistent view of the data. Data storage: the structured data are stored in a suitable data storage system, such as a relational database, a data warehouse or a NoSQL database, which requires selecting an appropriate data model, designing the database table structure and considering the security and scalability of the data.
Accordingly, in one possible implementation, the performance test requirements may be data-structured to obtain the scale, structure, content, target, scene, index and load coding vectors, for example, as follows. Scale coding vector: extract the scale-related information from the performance test requirements, normalise and standardise it, and encode it into vector form, for instance using one-hot encoding or numerical encoding. Structure coding vector: extract the structural information of the system, abstract and represent it (for example as a graph structure or a hierarchy), and convert it into the structure coding vector. Content coding vector: extract the information related to business scenarios or functions, abstract and represent it, and encode it into vector form, for instance using text embedding techniques or domain expertise. Target coding vector: define the test objectives and expected performance indexes, normalise and standardise them, and convert them into the target coding vector using numerical encoding, threshold encoding or similar schemes. Scene coding vector: extract the information related to the test environment and conditions, abstract and represent it, and convert it into the scene coding vector using categorical encoding, feature encoding or similar schemes. Index coding vector: extract the performance indexes of interest, normalise and standardise them, and convert them into the index coding vector using numerical encoding, threshold encoding or similar schemes. Load coding vector: extract the information related to load patterns and load types, abstract and represent it, and convert it into the load coding vector using categorical encoding, feature encoding or similar schemes.
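A rough sketch of such an encoding scheme is given below, assuming numerical encoding with simple normalisation for scale and target figures and one-hot encoding for categorical items such as the scenario and load type; the vocabularies and reference values are invented for demonstration.

```python
import numpy as np

SCENARIOS = ["peak-hour shopping", "batch settlement", "daily browsing"]
LOAD_TYPES = ["normal", "peak", "stress"]

def one_hot(value: str, vocabulary: list) -> np.ndarray:
    # Categorical (one-hot) encoding; unknown values map to the zero vector.
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    if value in vocabulary:
        vec[vocabulary.index(value)] = 1.0
    return vec

# Numerical encoding with simple normalisation for scale and target figures.
scale_vec = np.array([500 / 10_000, 1_000_000 / 10_000_000], dtype=np.float32)
target_vec = np.array([200.0 / 1_000.0], dtype=np.float32)

# One-hot encoding for scenario and load type.
scene_vec = one_hot("peak-hour shopping", SCENARIOS)
load_vec = one_hot("stress", LOAD_TYPES)

print(scale_vec, target_vec, scene_vec, load_vec)
```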
Specifically, in S22, semantic association features among the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector are extracted to obtain the performance test requirement semantic understanding vector. In one specific example of the present application, the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector are passed through a semantic association encoder based on a converter module to obtain the performance test requirement semantic understanding vector. It should be understood that the converter-module-based semantic association encoder can semantically associate and integrate these coding vectors to extract the association information and contextual semantics among them, thereby obtaining the performance test requirement semantic understanding vector.
Notably, a converter-module-based semantic association encoder refers to the use of a neural network architecture built on converter modules to perform semantic association modelling and integration of the coding vectors, where the converter module generally refers to a Transformer model.
Accordingly, in one possible implementation, the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector may be passed through the converter-module-based semantic association encoder to obtain the performance test requirement semantic understanding vector, for example, as follows: prepare the scale, structure, content, target, scene, index and load coding vectors as input data; construct a semantic association encoder using a neural network architecture based on converter modules, the encoder being a stack of several encoder layers, each containing a multi-head self-attention mechanism and a feed-forward neural network; feed the seven coding vectors into the semantic association encoder; the encoder performs semantic association modelling on the input coding vectors through the self-attention mechanism, which computes association weights between each position and all other positions and thereby captures global semantic associations; by stacking multiple encoder layers, the encoder gradually extracts higher-level semantic information and performs feature interaction and enhancement, so that the coding vectors better express the associations and dependencies among them; after processing by the semantic association encoder, a unified encoded vector representation, namely the performance test requirement semantic understanding vector, is obtained, which contains semantic understanding and association information on scale, structure, content, target, scene, index and load.
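A minimal PyTorch sketch of a converter (Transformer) module based semantic association encoder of this kind is shown below; it assumes the seven coding vectors have already been projected to a common dimension, and all hyperparameters are illustrative rather than values specified by the application.

```python
import torch
import torch.nn as nn

class SemanticAssociationEncoder(nn.Module):
    """Treat the seven coding vectors as a length-7 sequence and let
    multi-head self-attention model the associations among them."""

    def __init__(self, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, vectors: torch.Tensor) -> torch.Tensor:
        # vectors: (batch, 7, dim) -- scale, structure, content, target,
        # scene, index and load coding vectors stacked along dim=1.
        encoded = self.encoder(vectors)     # (batch, 7, dim)
        return encoded.mean(dim=1)          # pooled semantic understanding vector

encoder = SemanticAssociationEncoder()
stacked = torch.randn(1, 7, 64)             # placeholder coding vectors
understanding_vector = encoder(stacked)     # shape (1, 64)
```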
It should be noted that, in other specific examples of the present application, the semantic association features among the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector may be extracted in other ways to obtain the performance test requirement semantic understanding vector. For example: prepare the seven coding vectors as input data and construct a converter-module-based semantic association encoder as described above, the encoder being a stack of encoder layers each containing a multi-head self-attention mechanism and a feed-forward neural network; feed the coding vectors into the encoder, which performs semantic association modelling through the self-attention mechanism by computing association weights between each position and all other positions to capture global semantic associations; by observing the self-attention weights of different layers and the interactions among the encoded vectors, semantic association features among the coding vectors, such as association strength, association type and association direction, can be extracted; on the basis of these features, contextual semantic integration is carried out on the coding vectors, for example by weighted fusion, concatenation or other combination of the encoded vectors, so as to comprehensively consider the association information among them; after encoding and contextual semantic integration, the performance test requirement semantic understanding vector is obtained, containing semantic understanding and association information on scale, structure, content, target, scene, index and load.
It should be noted that, in other specific examples of the present application, the performance test requirements may also be semantically encoded in other ways to obtain the performance test requirement semantic understanding vector, for example: organise the performance test requirements into a clear document or list; pre-process the requirement text, including removing stop words, punctuation and special characters, and performing stemming or lemmatization; convert each word into a vector representation using a word embedding model (such as Word2Vec, GloVe or BERT), using random vectors or special tokens for words not covered by the pre-trained embedding model; average or weight-average the word vectors in each requirement text to obtain a sentence vector, using simple mean pooling or a more elaborate weighted pooling method; encode the sentence vectors with a pre-trained semantic encoding model (such as BERT or GPT) to obtain richer semantic representations; and aggregate the semantic understanding vectors of all the performance test requirement texts, using averaging, maximum pooling or concatenation, to obtain the overall performance test requirement semantic understanding vector.
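The word-embedding route described above could, for example, be sketched with the Hugging Face transformers library as follows; the model name, the mean-pooling choice and the sample requirement sentences are assumptions made for demonstration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any general-purpose encoder works here; bert-base-uncased is only an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

requirement_texts = [
    "System must sustain 500 concurrent users with p95 latency under 200 ms",
    "Peak-hour shopping scenario with ramp-up load over 30 minutes",
]

with torch.no_grad():
    batch = tokenizer(requirement_texts, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (batch, seq, hidden)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    sentence_vecs = (hidden * mask).sum(1) / mask.sum(1)   # mean pooling over tokens

# Aggregate all requirement sentences into one overall understanding vector.
requirement_understanding_vector = sentence_vecs.mean(dim=0)
```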
In particular, in step S3, the generated metadata are obtained based on the performance test requirement semantic understanding vector. The metadata are a description of the performance test data, including their name, source, format, rules and relationships. That is, the metadata of the performance test data are information describing the data structure and attributes of the performance test data: they define the fields, data types, length limits, value ranges and other relevant attributes. The metadata provide a description of the structure and content of the performance test data so that data generation and processing can be performed according to predetermined rules. In one specific example of the present application, the generated metadata are obtained by passing the performance test requirement semantic understanding vector through an AIGC-based metadata generator.
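The application does not disclose the internal structure of the AIGC-based metadata generator; purely as an assumed illustration, it could be a small neural generator that maps the semantic understanding vector to a handful of metadata attributes, as sketched below with invented heads and dimensions.

```python
import torch
import torch.nn as nn

class MetadataGenerator(nn.Module):
    """Assumed sketch: map the semantic understanding vector to a few
    metadata attributes (record count, field count, distribution class)."""

    def __init__(self, dim: int = 64, n_distributions: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.record_count = nn.Linear(128, 1)                  # regression head
        self.field_count = nn.Linear(128, 1)                   # regression head
        self.distribution = nn.Linear(128, n_distributions)    # classification head

    def forward(self, understanding_vector: torch.Tensor) -> dict:
        h = self.backbone(understanding_vector)
        return {
            "record_count": self.record_count(h),
            "field_count": self.field_count(h),
            "distribution_logits": self.distribution(h),
        }

generator = MetadataGenerator()
metadata = generator(torch.randn(1, 64))   # toy understanding vector
```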
It should be noted that, in other specific examples of the present application, the generated metadata may also be obtained from the performance test requirement semantic understanding vector in other ways, for example: first obtain the performance test requirement semantic understanding vector; decode it into a readable text form using a decoder (inverse converter) module; parse the decoded text and extract the key information and metadata it contains, which may be achieved with text processing techniques such as natural language processing (NLP) and information extraction; extract scale-related metadata (such as the number of users, the number of concurrent requests and the data volume), structure-related metadata (such as the system architecture, component structure and module division), content-related metadata (such as data types, data formats and data sources), target-related metadata (such as performance targets, response-time requirements and throughput requirements), scene-related metadata (such as the test environment, network conditions and user behaviour patterns), index-related metadata (such as performance indexes, measurement standards and evaluation methods) and load-related metadata (such as the load model, load generation tool and load parameters) from the parsed text; and combine the extracted metadata to form the complete performance test requirement metadata.
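The decode-then-parse alternative can be illustrated with simple rule-based information extraction over the decoded requirement text; the regular expressions and the sample sentence below are assumptions for demonstration, and a production system would rely on NLP and information extraction models instead.

```python
import re

decoded_text = (
    "Simulate 500 concurrent users issuing 2000 requests per second against the "
    "order service; response time must stay below 200 ms under peak load."
)

# Assumed extraction rules mapping phrases in the decoded text to metadata fields.
patterns = {
    "concurrent_users": r"(\d+)\s+concurrent users",
    "requests_per_second": r"(\d+)\s+requests per second",
    "response_time_ms": r"below\s+(\d+)\s*ms",
    "load_type": r"(normal|peak|stress) load",
}

metadata = {}
for key, pattern in patterns.items():
    match = re.search(pattern, decoded_text, flags=re.IGNORECASE)
    if match:
        metadata[key] = match.group(1)

print(metadata)
# {'concurrent_users': '500', 'requests_per_second': '2000',
#  'response_time_ms': '200', 'load_type': 'peak'}
```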
In particular, in steps S4 and S5, raw data are extracted from a data source, and the raw data are converted, combined and expanded according to the generated metadata to obtain the performance test data. In the technical solution of the present application, the data source may be a database, a file, an interface or another system; according to the source information in the generated metadata, the raw data can be acquired from the data source using SQL statements, file read/write operations, interface calls or other methods. In particular, conversion refers to formatting, encrypting, desensitising and otherwise processing the raw data so that they conform to the format and rules in the generated metadata; combination refers to splicing or nesting raw data of different sources or types according to the relationships in the generated metadata; and expansion refers to copying, perturbing or interpolating the raw data according to the distribution and characteristics in the generated metadata so that they reach the quantity and variety required of the performance test data.
Accordingly, in one possible implementation, the raw data may be extracted from the data source and then converted, combined and expanded according to the generated metadata to obtain the performance test data, for example, as follows: first determine the data source for the performance test; extract the raw data from the selected data source, for instance by reading files, querying databases or calling API interfaces; clean and pre-process the extracted raw data to remove invalid data, handle missing values, perform data format conversion and so on, ensuring accuracy and consistency; convert the raw data according to the generated performance test requirement metadata, screening, filtering and sorting them according to the scale, structure and content metadata so as to meet the test requirements; combine and expand the converted data according to the target, scene, index and load metadata, merging data from different sources to generate a richer test data set; generate and simulate data according to the generated metadata, for example generating simulated user behaviour and request data from the number of users and the number of concurrent requests so as to simulate real load conditions; verify and calibrate the generated performance test data to ensure their accuracy and plausibility, for instance using data verification tools and statistical analysis methods; and store and manage the generated performance test data for subsequent test use.
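A compact pandas sketch of the convert, combine and expand operations on raw records is given below; the masking rule, join key and target volume are assumed stand-ins for what the generated metadata would actually prescribe.

```python
import numpy as np
import pandas as pd

# Raw data extracted from two assumed sources.
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "phone": ["13800000001", "13800000002", "13800000003"],
    "amount": [120.0, 80.5, 42.0],
})
users = pd.DataFrame({"order_id": [1, 2, 3], "region": ["east", "south", "north"]})

# Convert: format and desensitise fields so they follow the rules in the
# generated metadata (here: mask the middle digits of the phone number).
orders["phone"] = orders["phone"].str.replace(
    r"(\d{3})\d{4}(\d{4})", r"\1****\2", regex=True
)

# Combine: splice raw data of different origins according to the relationship
# recorded in the generated metadata (here: a join on order_id).
combined = orders.merge(users, on="order_id")

# Expand: replicate and perturb records until the volume required by the scale
# metadata is reached (target_rows is an assumed figure).
target_rows = 9
expanded = pd.concat([combined] * (target_rows // len(combined)), ignore_index=True)
expanded["amount"] = expanded["amount"] * (1 + 0.05 * (np.arange(len(expanded)) % 3))

print(expanded.head())
```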
It should be appreciated that, before inference is performed with the neural network model described above, the converter-module-based semantic association encoder and the AIGC-based metadata generator need to be trained. That is, the performance test data construction method of the present application further includes a training phase for training the converter-module-based semantic association encoder and the AIGC-based metadata generator.
FIG. 3 is a flow chart of the training phase of the performance test data construction method according to an embodiment of the present application. As shown in FIG. 3, the performance test data construction method according to an embodiment of the present application includes a training phase comprising: S110, acquiring training data, the training data including training performance test requirements and true values of the generated metadata; S120, performing data structuring processing on the training performance test requirements to obtain a training scale coding vector, a training structure coding vector, a training content coding vector, a training target coding vector, a training scene coding vector, a training index coding vector and a training load coding vector; S130, passing the training scale coding vector, the training structure coding vector, the training content coding vector, the training target coding vector, the training scene coding vector, the training index coding vector and the training load coding vector through the converter-module-based semantic association encoder to obtain a training performance test requirement semantic understanding vector; S140, performing feature distribution optimization on the training performance test requirement semantic understanding vector to obtain an optimized training performance test requirement semantic understanding vector; S150, passing the optimized training performance test requirement semantic understanding vector through the AIGC-based metadata generator to obtain training generated metadata; S160, calculating a cross-entropy function value between the training generated metadata and the true values of the generated metadata to obtain a loss function value; and S170, training the converter-module-based semantic association encoder and the AIGC-based metadata generator with the loss function value.
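Steps S110 to S170 amount to an ordinary supervised training loop; the sketch below assumes, for simplicity, that the generated-metadata truth value is a single categorical label per sample, uses stand-in modules for the encoder and generator, and omits the feature distribution optimization of step S140, so all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the converter-module-based semantic association
# encoder and the AIGC-based metadata generator (shapes are assumptions).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
generator = nn.Linear(64, 10)             # 10 assumed metadata classes
criterion = nn.CrossEntropyLoss()         # S160: cross entropy against the truth value
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=1e-4
)

# S110: toy training data -- 7 stacked training coding vectors per sample
# (scale, structure, content, target, scene, index, load) plus a metadata label.
train_vectors = torch.randn(32, 7, 64)
train_labels = torch.randint(0, 10, (32,))

for epoch in range(5):
    # S130: semantic association encoding, pooled to the training performance
    # test requirement semantic understanding vector.
    understanding = encoder(train_vectors).mean(dim=1)
    # (S140, feature distribution optimization, is omitted in this sketch.)
    logits = generator(understanding)              # S150: training generated metadata
    loss = criterion(logits, train_labels)         # S160: loss function value
    optimizer.zero_grad()                          # S170: update encoder and generator
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```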
In particular, in the technical solution of the present application, after the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector are passed through the converter-module-based semantic association encoder, the performance test requirement semantic understanding vector carries semantic feature representations corresponding to the local feature distribution scales of the respective initial coding vectors, so that when it is passed through the AIGC-based metadata generator, a scale-heuristic distribution probability density mapping is performed based on those local feature distribution scales to obtain the generated metadata. However, because the semantic association encoding based on the converter module produces both intra-vector and inter-vector encoded semantic features, the performance test requirement semantic understanding vector contains a hybrid encoded semantic feature representation under the local feature distribution scales, which may reduce the training efficiency of the AIGC-based metadata generator. On this basis, during training, when generating metadata from the performance test requirement semantic understanding vector through the AIGC-based metadata generator, the applicant of the present application performs a semantic information homogenizing activation of the feature rank expression on the performance test requirement semantic understanding vector, in which V is the performance test requirement semantic understanding vector, v_i is the i-th feature value of the performance test requirement semantic understanding vector, ||V||_2 denotes the two-norm of the performance test requirement semantic understanding vector, the logarithm is taken to base 2, α is a weight hyperparameter, and v_i' is the i-th feature value of the optimized training performance test requirement semantic understanding vector. Here, the semantic information homogenizing activation of the feature rank expression maps the feature distribution of the performance test requirement semantic understanding vector V from the high-dimensional feature space into the probability density mapping space; because the hybrid encoded semantic features exhibit different mapping behaviours at different feature distribution levels, a scale-heuristic mapping strategy cannot achieve optimal efficiency. By homogenizing the rank-expressed semantic information on the basis of the feature vector norm rather than matching scale features, similar feature rank expressions are activated in a similar manner while the correlation between widely differing feature rank expressions is reduced, thereby alleviating the low efficiency of the probability expression mapping of the feature distribution of the performance test requirement semantic understanding vector V under different spatial rank expressions and improving the training efficiency of the AIGC-based metadata generator.
In summary, the performance test data construction method according to the embodiments of the present application has been explained; it generates performance test data that meet the test requirements by acquiring key information from the performance test requirements and converting, combining and expanding the original data based on that key information.
Further, a performance test data construction device is also provided.
FIG. 5 is a block diagram of a performance test data construction apparatus according to an embodiment of the present application. As shown in FIG. 5, the performance test data construction apparatus 300 according to an embodiment of the present application includes: a requirement acquisition module 310, configured to acquire performance test requirements; a semantic coding module 320, configured to semantically encode the performance test requirements to obtain a performance test requirement semantic understanding vector; a metadata generation module 330, configured to generate metadata based on the performance test requirement semantic understanding vector; an original data extraction module 340, configured to extract original data from a data source; and a performance test data generation module 350, configured to convert, combine and expand the original data according to the generated metadata to obtain performance test data.
As described above, the performance test data construction apparatus 300 according to the embodiment of the present application may be implemented in various wireless terminals, for example a server that carries out the performance test data construction method. In one possible implementation, the performance test data construction apparatus 300 may be integrated into the wireless terminal as a software module and/or a hardware module. For example, it may be a software module in the operating system of the wireless terminal, or an application developed for the wireless terminal; of course, it may equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the performance test data construction apparatus 300 and the wireless terminal may be separate devices, and the apparatus 300 may be connected to the wireless terminal through a wired and/or wireless network and transmit interaction information in an agreed data format.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A method of constructing performance test data, comprising:
acquiring performance test requirements;
performing semantic coding on the performance test requirements to obtain performance test requirement semantic understanding vectors;
based on the performance test requirement semantic understanding vector, generating metadata;
extracting raw data from a data source; and
converting, combining and expanding the original data according to the generated metadata to obtain performance test data.
2. The method of claim 1, wherein the performance test requirements include the scale, structure and content of the test data, and the targets, scenarios, indexes and loads of the performance test.
3. The method of claim 2, wherein semantically encoding the performance test requirements to obtain a performance test requirements semantic understanding vector comprises:
performing data structuring processing on the performance test requirements to obtain a scale coding vector, a structure coding vector, a content coding vector, a target coding vector, a scene coding vector, an index coding vector and a load coding vector; and
extracting semantic association features among the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector to obtain the performance test requirement semantic understanding vector.
4. The method of claim 3, wherein extracting semantic association features among the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector to obtain the performance test requirement semantic understanding vector comprises:
passing the scale coding vector, the structure coding vector, the content coding vector, the target coding vector, the scene coding vector, the index coding vector and the load coding vector through a semantic association encoder based on a converter module to obtain the performance test requirement semantic understanding vector.
5. The method of claim 4, wherein generating metadata based on the performance test requirement semantic understanding vector comprises:
passing the performance test requirement semantic understanding vector through an AIGC-based metadata generator to obtain the generated metadata.
6. The method of claim 5, further comprising a training step for training the semantic association encoder based on the converter module and the metadata generator based on the AIGC;
wherein the training step comprises:
acquiring training data, wherein the training data comprises training performance test requirements and true values of the generated metadata;
performing data structuring processing on the training performance test requirements to obtain a training scale coding vector, a training structure coding vector, a training content coding vector, a training target coding vector, a training scene coding vector, a training index coding vector and a training load coding vector;
passing the training scale coding vector, the training structure coding vector, the training content coding vector, the training target coding vector, the training scene coding vector, the training index coding vector and the training load coding vector through the semantic association encoder based on the converter module to obtain a training performance test requirement semantic understanding vector;
performing feature distribution optimization on the training performance test requirement semantic understanding vector to obtain an optimized training performance test requirement semantic understanding vector;
passing the optimized training performance test requirement semantic understanding vector through the AIGC-based metadata generator to obtain training generated metadata;
calculating a cross-entropy function value between the training generated metadata and the true values of the generated metadata to obtain a loss function value; and
training the semantic association encoder based on the converter module and the AIGC-based metadata generator with the loss function value.
7. The method of claim 6, wherein performing feature distribution optimization on the training performance test requirement semantic understanding vector to obtain an optimized training performance test requirement semantic understanding vector, comprises: carrying out feature distribution optimization on the training performance test requirement semantic understanding vector by using the following optimization formula to obtain an optimized training performance test requirement semantic understanding vector;
wherein, in the optimization formula, V is the performance test requirement semantic understanding vector, v_i is the i-th feature value of the performance test requirement semantic understanding vector, ||V||_2 denotes the two-norm of the performance test requirement semantic understanding vector, the logarithm is taken to base 2, α is a weight hyperparameter, and v_i' is the i-th feature value of the optimized training performance test requirement semantic understanding vector.
8. A performance test data construction apparatus, comprising:
the requirement acquisition module is used for acquiring performance test requirements;
the semantic coding module is used for carrying out semantic coding on the performance test requirements to obtain semantic understanding vectors of the performance test requirements;
the metadata generation module is used for generating metadata based on the performance test requirement semantic understanding vector;
the original data extraction module is used for extracting original data from a data source; and
the performance test data generation module is used for converting, combining and expanding the original data according to the generated metadata to obtain performance test data.
CN202311164930.6A 2023-09-11 2023-09-11 Performance test data construction method and device Pending CN117112438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311164930.6A CN117112438A (en) 2023-09-11 2023-09-11 Performance test data construction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311164930.6A CN117112438A (en) 2023-09-11 2023-09-11 Performance test data construction method and device

Publications (1)

Publication Number Publication Date
CN117112438A true CN117112438A (en) 2023-11-24

Family

ID=88796348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311164930.6A Pending CN117112438A (en) 2023-09-11 2023-09-11 Performance test data construction method and device

Country Status (1)

Country Link
CN (1) CN117112438A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117435505A (en) * 2023-12-04 2024-01-23 南京易迪森信息技术有限公司 Visual generation method of performance test script
CN117435505B (en) * 2023-12-04 2024-03-15 南京易迪森信息技术有限公司 Visual generation method of performance test script
CN117707986A (en) * 2024-02-05 2024-03-15 钦原科技有限公司 Software power consumption testing method and system for mobile terminal


Legal Events

Date Code Title Description
PB01 Publication