WO2021253904A1 - Test case set generation method, apparatus and device, and computer-readable storage medium - Google Patents

Test case set generation method, apparatus and device, and computer-readable storage medium

Info

Publication number
WO2021253904A1
WO2021253904A1 (application PCT/CN2021/081873; priority CN2021081873W)
Authority
WO
WIPO (PCT)
Prior art keywords
test case
case set
training
test
knowledge base
Prior art date
Application number
PCT/CN2021/081873
Other languages
English (en)
Chinese (zh)
Inventor
袁文静
周杰
卢道和
方镇举
翁玉萍
陈文龙
黄涛
韩海燕
Original Assignee
深圳前海微众银行股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2021253904A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis

Definitions

  • This application relates to the field of financial technology (Fintech), and in particular to a test case set generation method, apparatus, device, and computer-readable storage medium.
  • test cases usually include usage scenarios and their corresponding test results.
  • test cases are mainly written and maintained manually, which is inefficient and consumes a lot of manpower.
  • the existing automated case writing solutions need to save historical cases in the database first, and generate test case sets through database search and matching.
  • the database retrieval process can only retrieve existing cases, and the database cannot automatically produce new cases.
  • historical cases must be entered into the database manually, and the scope of case retrieval is very limited, which is inefficient and makes it impossible to realize automatic generation of test cases.
  • the main purpose of this application is to provide a test case generation method, device, equipment, and computer-readable storage medium, aiming to realize the automatic generation of test cases and improve the efficiency of test case generation.
  • the present application provides a method for generating a test case set, and the method for generating a test case set includes:
  • test case set knowledge base is generated by training a preset training model constructed by combining the BERT model and the knowledge graph;
  • the knowledge graph is used to analyze the case keywords and the target similar case set to inferentially generate a test case set.
  • before the step of obtaining case keywords, performing semantic analysis on the case keywords, and obtaining a semantic analysis result, the method further includes:
  • the first training test case set and the second training test case set are classified by the language representation model, and a test case set knowledge base is generated according to the classification result.
  • the step of performing preprocessing training on the preset training model according to the unlabeled first training test case set to obtain the initial training model includes:
  • the step of performing fine-tuning training on the initial training model according to the labeled second training test case set, and obtaining a trained language representation model includes:
  • the step of classifying the first training test case set and the second training test case set according to the language representation model, and generating a test case set knowledge base according to the classification result includes:
  • generating a test case set knowledge base according to the classified case sets.
  • the step of retrieving the test case collection knowledge base to obtain the retrieval result includes:
  • the target test case set is retrieved according to the semantic analysis result to obtain the similarity between the case keywords and the test cases in the target test case set, and the retrieval result is obtained.
  • the step of retrieving the target test case set according to the semantic analysis result to obtain the similarity between the case keywords and the test cases in the target test case set, and obtaining the retrieval result further include:
  • if the similarity is greater than or equal to the first preset threshold, it is determined that the same case set corresponding to the case keyword exists in the test case set knowledge base, and the target same case set is obtained and output.
  • the method further includes:
  • if the similarity is less than the first preset threshold, detecting whether the similarity is greater than a second preset threshold, where the second preset threshold is less than the first preset threshold;
  • the step is performed: obtaining a target similar case set;
  • the method further includes:
  • test case set generating device includes:
  • the analysis module is used to obtain case keywords, perform semantic analysis on the case keywords, and obtain semantic analysis results;
  • the retrieval module is configured to retrieve the test case set knowledge base according to the semantic analysis result to obtain the retrieval result, wherein the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model and the knowledge graph;
  • the first obtaining module is configured to obtain a target similar case set if it is determined that there is a similar case set in the test case set knowledge base according to the search result;
  • the first generating module is configured to analyze the case keywords and the target similar case set by using the knowledge graph to generate a test case set.
  • the present application also provides a test case set generating device, which includes: a memory, a processor, and a test case set generation program stored on the memory and executable on the processor, where the test case set generation program, when executed by the processor, implements the steps of the test case set generation method described above.
  • the present application also provides a computer-readable storage medium storing a test case set generation program, where the test case set generation program, when executed by a processor, implements the steps of the test case set generation method described above.
  • This application provides a test case set generation method, apparatus, device, and computer-readable storage medium. Case keywords are obtained and semantically analyzed to obtain a semantic analysis result; the test case set knowledge base is retrieved according to the semantic analysis result to obtain a retrieval result, where the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model with the knowledge graph; if it is determined according to the retrieval result that a similar case set exists in the test case set knowledge base, a target similar case set is obtained; and the case keywords and the target similar case set are analyzed using the knowledge graph to generate a test case set.
  • in this way, a search can be performed in the existing test case set knowledge base according to the case keywords to obtain the retrieval result.
  • then, the knowledge graph is used to automatically reason and generate a new test case set. Therefore, compared with the prior art, the present application can automatically generate a new test case set based on the existing test case set knowledge base, thereby improving the generation efficiency of the test case set.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in a solution of an embodiment of the application
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for generating a test case set of this application
  • FIG. 3 is a schematic diagram of the functional modules of the first embodiment of the apparatus for generating a test case set of this application.
  • FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the application.
  • the test case set generating device in the embodiment of the present application may be a smart phone, or a terminal device such as a PC (personal computer), a tablet computer, or a portable computer.
  • the test case set generating device may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • the memory 1005 may be a high-speed RAM memory, or a non-volatile memory such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • the structure shown in the figure does not constitute a limitation on the test case set generating device, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a test case set generation program.
  • the network interface 1004 is mainly used to connect to a back-end server and communicate with the back-end server;
  • the user interface 1003 is mainly used to connect to a client and communicate with the client;
  • the processor 1001 can be used to call the test case set generation program stored in the memory 1005 and execute each step of the following test case set generation method.
  • This application provides a method for generating a test case set.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for generating a test case set of this application.
  • the method for generating a test case set includes:
  • Step S10 obtaining case keywords, performing semantic analysis on the case keywords, and obtaining semantic analysis results
  • the method is applied to a test case set generating device, which is equipped with a test case set generator.
  • a test case set is a collection of test scenarios that developers and testers may use during product development, together with the corresponding expected test results.
  • it also includes test information related to abnormal scenarios.
  • the test case set can include different usage scenarios such as Chinese login name, English login name, and special character login name, as well as test results when tested in these different test scenarios.
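As an illustration, the "login" example above — usage scenarios paired with expected test results — might be structured as follows (field names and values are hypothetical, not taken from the application):

```python
# Hypothetical shape of one test case set, pairing each usage
# scenario with its expected test result.
login_case_set = {
    "keyword": "login",
    "cases": [
        {"scenario": "Chinese login name",           "expected": "login succeeds"},
        {"scenario": "English login name",           "expected": "login succeeds"},
        {"scenario": "special-character login name", "expected": "error message shown"},
    ],
}
```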
  • the staff can input the case keywords of the test case set they want to generate through the corresponding software on the working end, such as "login", to trigger the test case set generation instruction.
  • when the test case set generator receives the test case set generation instruction, it obtains the case keywords input by the staff, and then performs semantic analysis on the case keywords, conducting a context-sensitive examination to obtain the corresponding semantic analysis result.
  • the semantic analysis method used in this application is a commonly used semantic analysis method, such as latent semantic analysis.
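The application only requires "a commonly used semantic analysis method"; as a minimal hedged sketch, keyword-to-case similarity could be scored with a cosine over term-frequency vectors (full latent semantic analysis would additionally project these vectors through an SVD):

```python
import math
from collections import Counter

def term_vector(text):
    # bag-of-words term-frequency vector
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

score = cosine(term_vector("user login password check"),
               term_vector("login password verification"))
```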
  • Step S20 According to the semantic analysis result, search the test case set knowledge base to obtain the search result, wherein the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model and the knowledge graph;
  • test case set knowledge base is retrieved to obtain the retrieval result.
  • the test case set knowledge base contains all product-related case sets, and the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model with the knowledge graph.
  • BERT (Bidirectional Encoder Representations from Transformers) is a neural-network-based natural language processing pre-training technique.
  • test cases require a large amount of test background knowledge. What BERT learns is a text matching model; much test background common sense is implicit and vague and difficult to reflect in the pre-training data, and BERT alone lacks semantic understanding and reasoning ability.
  • therefore, knowledge graph information is incorporated in the pre-training process to organize the knowledge being tested.
  • the calculation model based on symbolic semantics can provide prior knowledge for BERT, giving it certain test common sense and reasoning ability. Therefore, in this application, a preset training model combining BERT with the knowledge graph is trained to generate the test case set knowledge base, enabling the knowledge base to capture different test common sense and to use that common sense to infer new test case sets.
  • the retrieval result is the similarity between the semantically analyzed case keywords and the existing cases in the test case set knowledge base. According to the similarity, retrieval results generally fall into three categories: identical, similar, and basically different.
  • Step S30 If it is determined according to the search result that there is a similar case set in the test case set knowledge base, then a target similar case set is obtained.
  • the target similar case set is obtained.
  • similarity values with different test sets are obtained at the same time; the largest similarity is used as the final retrieval result, and the case set corresponding to the largest similarity value is taken as the target similar case set.
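The selection rule above — take the case set with the largest similarity as the target similar case set — can be sketched as follows (names and scores are illustrative, not from the application):

```python
def pick_target(similarities):
    """similarities: {case_set_name: similarity score}.

    The case set with the largest similarity value becomes the
    target similar case set; its score is the final retrieval result.
    """
    name = max(similarities, key=similarities.get)
    return name, similarities[name]

target, score = pick_target({"login": 0.82, "password reset": 0.41})
```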
  • Step S40 using the knowledge graph to analyze the case keywords and the target similar case set to generate a test case set
  • the knowledge graph is a knowledge domain visualization or a knowledge domain mapping map, which is a series of various graphs showing the relationship between the development process of knowledge and the structure.
  • the reasoning of the knowledge graph includes deductive reasoning and inductive reasoning. Since inductive reasoning can add new knowledge, inductive reasoning is mainly used in this application. Inductive reasoning can use the FOIL (First Order Inductive Learner) algorithm, association-rule mining algorithms for incomplete knowledge bases, and path-ranking algorithms. Specifically, one or more of the above algorithms are first used to learn or construct rules from the target similar case set; then, according to the learned or constructed rules, new entities are inferred from the case keywords and the entities in the target similar case set, and the test case set is constructed from the inferred new entities. For example, if the entered case keyword is "login password" and the existing target similar case set concerns the "login name", a test case set related to the login password can be inferred from the login password and login name.
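As a toy stand-in for the inductive-reasoning step (the application names FOIL, association-rule mining over incomplete knowledge bases, and path ranking; none of those algorithms is implemented here), one can re-instantiate the patterns found in the target similar case set with the new entity given by the case keyword:

```python
def infer_new_cases(keyword, similar_case_set):
    """Re-instantiate each scenario of the similar case set with the
    new entity (the case keyword). This mimics, very loosely, rule
    learning plus entity substitution; all names are illustrative.
    """
    entity = similar_case_set["entity"]            # e.g. "login name"
    return [
        {"scenario": c["scenario"].replace(entity, keyword),
         "expected": c["expected"]}
        for c in similar_case_set["cases"]
    ]

similar = {
    "entity": "login name",
    "cases": [
        {"scenario": "empty login name", "expected": "rejected"},
        {"scenario": "login name with special characters", "expected": "rejected"},
    ],
}
new_cases = infer_new_cases("login password", similar)
```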
  • the embodiment of the application provides a method for generating a test case set: case keywords are obtained and semantically analyzed to obtain a semantic analysis result; the test case set knowledge base is retrieved according to the semantic analysis result to obtain the retrieval result, where the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model with the knowledge graph; if it is determined according to the retrieval result that a similar case set exists in the test case set knowledge base, a target similar case set is obtained; and the knowledge graph is used to analyze the case keywords and the target similar case set to infer and generate a test case set.
  • in this way, a search can be performed in the existing test case set knowledge base according to the case keywords to obtain the retrieval result.
  • then, the knowledge graph is used to automatically reason and generate a new test case set. Therefore, compared with the prior art, the present application can automatically generate a new test case set based on the existing test case set knowledge base, thereby improving the generation efficiency of the test case set.
  • the method for generating a test case set further includes:
  • Step A Perform preprocessing training on the preset training model according to the unlabeled first training test case set to obtain the initial training model, where the preset training model is constructed based on the BERT model combined with the knowledge graph;
  • the preset training model is preprocessed according to the unlabeled first training test case set to obtain the initial training model, where the preset training model is constructed based on the BERT model combined with the knowledge graph.
  • the unlabeled first training test case set is the test case set that has been stored before.
  • the preset training model is constructed based on the BERT model combined with the knowledge graph.
  • BERT handles scenarios such as natural language semantic analysis and classification very well, but it has some shortcomings, such as a lack of common sense.
  • test cases require a large amount of test background knowledge. What BERT learns is a text matching model; much test background common sense is implicit and vague and difficult to reflect in the pre-training data, and BERT alone lacks semantic understanding and reasoning ability. Therefore, knowledge graph information is incorporated in the pre-training process to organize the knowledge being tested.
  • the calculation model based on symbolic semantics can provide prior knowledge for BERT, so that it has certain test common sense and reasoning ability.
  • BERT performs pre-training on a large corpus of test cases to learn the semantics of the test case text. Specifically, BERT first randomly hides some of the words, and then learns a language representation by predicting them from context, yielding the initial training model. For example, for the sentence "Dylan wrote 'Blowin' in the Wind' in 1962 and 'Chronicles: Volume One' in 2004", BERT can randomly hide words such as "Dylan", "1962", or "Blowin' in the Wind"; through continuous training, the model learns and stores the relationships between these words, so that the language representation captures how the words relate to one another.
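The random-hiding step of BERT's masked-language-model pre-training described above can be sketched as follows (a simplified illustration: real BERT masks about 15% of subword tokens and sometimes substitutes random tokens instead of the mask symbol):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Randomly hide tokens, as in BERT's masked-language-model
    pre-training. Returns the masked sequence and a mapping from
    hidden positions to the original words the model must predict
    from context.
    """
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok
        else:
            masked.append(tok)
    return masked, targets

sentence = "Dylan wrote Blowin' in the Wind in 1962".split()
masked, targets = mask_tokens(sentence, mask_prob=0.3, seed=42)
```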
  • BERT is combined with the knowledge graph, and the multi-information entities in the knowledge graph (such as "Dylan" and "Blowin' in the Wind" in the above example) are used as external knowledge to improve the language representation, so that the model learns the meaning of each word itself, not just the relationships between multiple words, while achieving structured knowledge encoding and heterogeneous information fusion (structured knowledge encoding transforms abstract knowledge into vectors and similar forms for language representation; heterogeneous information refers to different types of information such as lexical, syntactic, and knowledge information).
  • a preset training model is constructed by fusing the knowledge graph.
  • abstract knowledge information needs to be encoded so that the knowledge can be used for language representation. However, the encoding of words during BERT pre-training and the encoding of knowledge differ: although both are converted into vectors, they lie in different vector spaces. Therefore, the model must be designed to fuse heterogeneous information such as lexical, syntactic, and knowledge information; the model combining BERT with the knowledge graph can solve these problems.
  • Step B Perform fine-tuning training on the initial training model according to the labeled second training test case set to obtain a language representation model
  • the initial training model is fine-tuned and trained according to the labeled second training test case set to obtain the language representation model.
  • the labeled second training test set is a training test set that is not in the existing first training test case set, such as a test case set of a new login name.
  • the labeled second training test case set can supplement the first training test set, so that the resulting language representation model is more comprehensive, and a test case set knowledge base that is more in line with the real test scenario can be constructed.
  • the fine-tuning training process is completed by two modules: text encoder and knowledge encoder.
  • the text encoder is responsible for obtaining semantic information such as morphology and syntax from the input tokens of the second training test case set; the token vector, segmentation vector, and position vector are summed to obtain the input vector, and semantic features are then extracted by a multi-layer bidirectional Transformer encoder.
  • the knowledge encoder integrates additional entity-oriented knowledge information into the text information from the bottom layer, so that the heterogeneous information of tokens and entities can be represented in a unified feature space. Let {w_1, ..., w_n} denote the token vector sequence and {e_1, ..., e_n} the entity vector sequence. The two sequences are fused according to the following formulas:

    {w̃_1, ..., w̃_n} = MH-ATT({w_1, ..., w_n})
    {ẽ_1, ..., ẽ_n} = MH-ATT({e_1, ..., e_n})
    h_j = σ(W_t · w̃_j + W_e · ẽ_j + b)

  • where MH-ATT is the multi-head attention layer, h_j represents the internal hidden state fusing token and entity information, W_t and W_e represent the weights in the hidden layer, b represents the bias, and σ(·) is the non-linear activation function.
  • the language representation model is obtained, so that the test case set knowledge base can be obtained subsequently based on the language representation model.
  • Step C Classify the first training test case set and the second training test case set through the language representation model, and generate a test case set knowledge base according to the classification result;
  • the language representation model obtains the classification of the different training test case sets based on a predicted probability distribution, and finally generates the test case set knowledge base. The predicted probability distribution can be calculated as:

    p = softmax(linear(h))

  • where linear(·) represents the linear layer and h is the hidden representation output by the language representation model.
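The classification step — a softmax over the linear layer mentioned above — can be sketched as follows (the hidden vector and the two class weight rows are made-up values):

```python
import math

def linear(h, W, b):
    # fully connected layer: one raw score (logit) per case class
    return [sum(wi * hi for wi, hi in zip(row, h)) + bi
            for row, bi in zip(W, b)]

def softmax(z):
    # numerically stable softmax over the logits
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]

h = [0.2, -0.5, 1.0]                      # hidden representation from the model
W = [[0.1, 0.3, -0.2], [0.0, 0.5, 0.4]]   # weights for 2 hypothetical case classes
b = [0.0, 0.1]
p = softmax(linear(h, W, b))              # predicted probability distribution
```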
  • the test case set knowledge base is generated from different types of cases; for example, the user name and the password can both be classified as login cases.
  • preprocessing training is performed on the preset training model according to the unlabeled first training test case set to obtain the initial training model, where the preset training model is constructed based on the BERT model combined with the knowledge graph; fine-tuning training is performed on the initial training model according to the labeled second training test case set to obtain a language representation model; and the first training test case set and the second training test case set are classified through the language representation model, with the test case set knowledge base generated according to the classification results.
  • the training model is trained to obtain the test case set knowledge base, which realizes the classification of different test case sets to facilitate subsequent retrieval, thereby improving the efficiency of subsequent test case set generation.
  • step A includes:
  • Step a1 Obtain the first attribute information of the unlabeled first training test case set
  • Step a2 dividing the first training test case set according to the first attribute information to obtain multiple first training test case subsets
  • Step a3 Perform preprocessing training on the preset training model according to a plurality of first training test case subsets to obtain corresponding multiple initial training models, wherein the preset training model is constructed based on the BERT model combined with the knowledge map of.
  • the training test case set (including the first training test case set and the second training test case set) can be divided according to the attribute information, so as to train to obtain multiple language representation models corresponding to different attribute information, and then combine the language The classification results and attribute information of the characterization model are classified, and the test case set knowledge base is constructed.
  • the training model input source is composed of four parts: the test knowledge public database, the BUG database, the business scenario database, and the training database.
  • the test knowledge public database is mainly common test cases with business commonality, such as login and password verification.
  • the BUG database is a set of BUG use cases found in production;
  • the business scenario library is a collection of test cases written in a specific business scenario, and
  • the training database is a set of test cases manually annotated on the TCTP platform.
  • the training case set will be trained according to training data of three dimensions: full product cases, specific project product cases, and personalized writing cases.
  • the first attribute information can include different attributes such as full product cases, specific project product cases, and personalized writing cases.
  • the full product case is, for example, a set of test cases for a type of product such as insurance
  • the project product case is a set of test cases for a specific product such as login
  • a personalized writing case can be a set of test cases associated with each writer.
  • the classification results of the same case in the initial training model formed by it may be different, and the association relationship between different entities may be different.
  • the preset training models are preprocessed according to a plurality of first training test case subsets respectively to obtain corresponding multiple initial training models, where the preset training models are constructed based on the BERT model combined with the knowledge graph.
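Step a2's division of the training test case set by attribute information can be sketched as follows (attribute names follow the three dimensions listed above; the case values are hypothetical):

```python
from collections import defaultdict

def divide_by_attribute(case_set):
    """Split a training test case set into subsets keyed by its
    attribute information, one subset per attribute value."""
    subsets = defaultdict(list)
    for attr, case in case_set:
        subsets[attr].append(case)
    return dict(subsets)

cases = [("full product", "insurance purchase"),
         ("specific project product", "login"),
         ("full product", "insurance claim")]
subsets = divide_by_attribute(cases)
```

Each resulting subset would then be used to pre-process a separate copy of the preset training model, yielding one initial training model per attribute.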
  • step B includes:
  • Step b1 Obtain the labeled second training test case set and its second attribute information
  • Step b2 dividing the second training test case set according to the second attribute information to obtain a plurality of second training test case subsets
  • Step b3 Perform fine-tuning training on the corresponding initial training model according to a plurality of second training test case subsets, respectively, to obtain multiple language representation models corresponding to the second attribute information.
  • according to the second attribute information, the initial training model matching the second attribute information is determined, and the second training test case subset carrying that attribute information is fed into the corresponding initial training model for fine-tuning training, obtaining multiple language representation models.
  • step C includes:
  • Step c1 Classify the corresponding first training test case subset and the second training test case subset through multiple language representation models to obtain multiple test case sets corresponding to the first attribute information, and based on the Multiple test case sets generate test case set knowledge base;
  • the test case set knowledge base includes cases, attributes (full product cases, specific project product cases, personalized product cases, personalized writing cases), and test sets. Different test sets are classified into corresponding cases; at the same time, the same test set may, according to different attributes, correspond to different cases in the attribute-specific sub-knowledge bases.
  • multiple language representation models with different attributes form the final test case set knowledge base, which ensures the completeness of the knowledge base and also enables the test case sets to match more usage scenarios according to different attributes, further improving the accuracy of test case set generation.
  • the search range can be narrowed based on the input candidate attribute information, and the retrieval efficiency is improved, thereby improving the generation efficiency of the test case set.
  • the method for generating a test case set further includes:
  • Step D obtain candidate attribute information
  • when the worker triggers the test case set generation instruction, in addition to the case keywords, candidate attribute information can also be input, where the candidate attribute information is the attribute information corresponding to the test case set to be generated; it corresponds to the attribute information of each language representation model in the test case set knowledge base, that is, the candidate attribute information is used to select the associated test case sets during retrieval.
  • the test case set generator can first Get candidate attribute information.
  • Step S20 includes:
  • Step E Determine a target test case set corresponding to the candidate attribute information in the test case set knowledge base
  • Step F retrieve the target test case set according to the semantic analysis result to obtain the similarity between the keyword of the case and the test case in the target test case set to obtain the retrieval result;
  • the output result gives a set of test cases that conform to the attribute according to the cases associated with the test attribute.
  • For example, if the candidate attribute is the specific project product case set, only the sub-knowledge base whose attribute is the specific project product case set is retrieved, instead of also retrieving the full product cases and the personalized writing cases; this makes the retrieval process more efficient and the retrieved results more accurate. The corresponding retrieval result is then determined according to the similarity between the case keywords and the test cases in the test case set of the corresponding attribute.
  • Thus, this embodiment can narrow the range of the test case set knowledge base that needs to be retrieved according to the candidate attribute information, thereby improving the efficiency and accuracy of retrieval.
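As a rough illustration of the attribute-narrowed retrieval described above, the following Python sketch partitions the knowledge base by attribute and scores cases with a simple token-overlap similarity. The attribute names, case texts, and the similarity function are illustrative stand-ins for the BERT-based retrieval the application actually describes.

```python
def token_similarity(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two strings
    (a crude placeholder for a learned semantic similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(knowledge_base: dict, candidate_attribute: str, case_keywords: str):
    """Search only the sub-knowledge-base matching the candidate attribute,
    instead of scanning every attribute's case set."""
    target_set = knowledge_base.get(candidate_attribute, [])
    # Score every test case in the narrowed target set, best match first.
    scored = [(token_similarity(case_keywords, case), case) for case in target_set]
    return sorted(scored, reverse=True)

# Example knowledge base keyed by attribute (all entries are illustrative).
kb = {
    "specific project product cases": [
        "user password length check",
        "user name special character check",
    ],
    "full product cases": ["login page layout check"],
}

results = retrieve(kb, "specific project product cases", "user password check")
```

Because only the matching sub-knowledge-base is scored, the full product cases are never touched during this retrieval, which is the efficiency gain the embodiment describes.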
  • After step S20, the method further includes:
  • Step G: Detect whether the similarity in the retrieval result is greater than or equal to a first preset threshold.
  • Step H: If the similarity is greater than or equal to the first preset threshold, determine that a same case set corresponding to the case keywords exists in the test case set knowledge base, obtain the target same case set, and output it.
  • First, it is detected whether the similarity in the retrieval result is greater than or equal to the first preset threshold.
  • If the similarity is greater than or equal to the first preset threshold, it indicates that a case set identical to the one indicated by the input case keywords already exists in the current test case set knowledge base. The target same case set is then directly determined according to the similarity and output, so that the required test case set is obtained.
  • After step H, the method also includes:
  • Step I: If the similarity is less than the first preset threshold, detect whether the similarity is greater than a second preset threshold, where the second preset threshold is less than the first preset threshold.
  • If the similarity is less than the first preset threshold, it means that an identical test case set does not exist in the test case set knowledge base. However, the knowledge base, generated by the training model combining BERT with the knowledge graph, has a certain learning ability.
  • It is therefore necessary to determine whether similar cases exist, that is, to determine whether the similarity is greater than the second preset threshold.
  • Step J: If the similarity is greater than the second preset threshold, determine that a similar case set exists in the test case set knowledge base, and execute step S30: obtain a target similar case set.
  • The reasoning ability of the knowledge graph is then used to generate a test case set by inference. For example, if the case keyword is "user password", and it is determined that a similar test set exists in the test case set knowledge base, namely the test set related to "user name", whose cases include "does not contain special characters" and "length is at least six characters", it can be inferred that the user password yields a test case set in which the password does not contain special characters and has a length of at least six characters.
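The user-name/user-password example above can be sketched as follows. The dict-based "knowledge graph" and the constraint-transfer rule are simplifications assumed for illustration, not the application's actual reasoning mechanism.

```python
# Toy knowledge graph: entity -> list of known constraints (test conditions).
# Entities and constraints mirror the example in the text and are illustrative.
knowledge_graph = {
    "user name": [
        "does not contain special characters",
        "length is at least six characters",
    ],
}

def infer_test_cases(new_entity: str, similar_entity: str, graph: dict) -> list:
    """Generate test cases for a new entity by reusing the constraints
    of a similar entity found in the graph."""
    return [f"{new_entity}: {constraint}"
            for constraint in graph.get(similar_entity, [])]

cases = infer_test_cases("user password", "user name", knowledge_graph)
```

A production system would weight each transferred constraint by the relation strength in the graph rather than copying all of them; that refinement is omitted here.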
  • Step K: If the similarity is less than or equal to the second preset threshold, output prompt information to prompt the user to manually generate a test case set.
  • In this case, every test case in the test case set knowledge base differs greatly from the input case keywords.
  • The case corresponding to the input case keywords is therefore a brand-new case, and it is impossible to directly output an existing test case set or to generate one by reasoning. The user is prompted to manually generate the test case set, which is then added manually.
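Steps G through K amount to a two-threshold decision over the best retrieval similarity, which might be sketched like this. The threshold values 0.9 and 0.6 are illustrative; the application does not fix concrete values.

```python
FIRST_THRESHOLD = 0.9   # at or above: an identical case set exists (step H)
SECOND_THRESHOLD = 0.6  # strictly above: a similar case set exists (step J)

def decide(similarity: float) -> str:
    """Route a retrieval result to one of the three outcomes
    described in steps H, J, and K."""
    if similarity >= FIRST_THRESHOLD:
        return "output same case set"          # step H
    if similarity > SECOND_THRESHOLD:
        return "reason from similar case set"  # step J
    return "prompt manual generation"          # step K
```

Note the asymmetry taken from the text: the first comparison is "greater than or equal to", while the second is strictly "greater than", so a similarity exactly at the second threshold falls through to manual generation.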
  • After step K, the method also includes:
  • Step k1: Obtain the labeled test case set manually generated by the user.
  • Step k2: Update the test case set knowledge base according to the labeled test case set.
  • The test case set knowledge base can learn from the labeled test case set, thereby expanding the knowledge base.
  • In this way, according to the retrieval result, the same test case set can be directly output based on the similarity, or a test case set can be generated by reasoning based on a similar test case set.
  • When a test case set cannot be output from the test case set knowledge base, a labeled test case set can be generated manually, and the knowledge base is then updated with the labeled test case set so as to expand it.
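Steps k1 and k2 can be sketched as a simple knowledge-base update. The dict-of-lists layout and the attribute key are assumptions of this sketch; the application's knowledge base is model-backed rather than a plain dictionary.

```python
def update_knowledge_base(kb: dict, attribute: str, labeled_cases: list) -> dict:
    """Fold a manually labeled test case set back into the knowledge base
    under its attribute, creating the attribute bucket if it is new."""
    kb.setdefault(attribute, [])
    for case in labeled_cases:
        if case not in kb[attribute]:   # avoid duplicate entries
            kb[attribute].append(case)
    return kb

kb = {"full product cases": ["login page layout check"]}
update_knowledge_base(kb, "personalized product cases", ["dark mode toggle check"])
```

After the update, later retrievals against the "personalized product cases" sub-knowledge-base can find the manually added case, which is the expansion effect described above.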
  • the application also provides a device for generating a test case set.
  • FIG. 3 is a schematic diagram of the functional modules of the first embodiment of the apparatus for generating a test case set according to the present application.
  • the test case set generating device includes:
  • the analysis module 10 is used to obtain case keywords, perform semantic analysis on the case keywords, and obtain semantic analysis results;
  • the retrieval module 20 is configured to retrieve the test case set knowledge base according to the semantic analysis result to obtain the retrieval result, wherein the test case set knowledge base is generated by training a preset training model constructed by combining the BERT model with the knowledge graph;
  • the first obtaining module 30 is configured to obtain a target similar case set if it is determined that there is a similar case set in the test case set knowledge base according to the search result;
  • the first generation module 40 is configured to analyze the case keywords and the target similar case set using the knowledge graph, and generate a test case set by reasoning.
  • test case set generating device further includes:
  • the pre-training module is configured to perform pre-processing training on the preset training model according to the unlabeled first training test case set to obtain the initial training model, where the preset training model is constructed based on the BERT model combined with the knowledge graph;
  • the fine-tuning training module is configured to perform fine-tuning training on the initial training model according to the labeled second training test case set to obtain a language representation model;
  • the second generation module is configured to classify the first training test case set and the second training test case set through the language representation model, and generate the test case set knowledge base according to the classification result.
  • the pre-training module further includes:
  • the first acquiring unit is configured to acquire the first attribute information of the unlabeled first training test case set;
  • the first dividing unit is configured to divide the first training test case set according to the first attribute information to obtain multiple first training test case subsets;
  • the pre-training unit is configured to perform pre-processing training on the preset training model according to the plurality of first training test case subsets to obtain multiple corresponding initial training models, wherein the preset training model is constructed based on the BERT model combined with the knowledge graph;
  • the fine-tuning training module further includes:
  • the second acquiring unit is configured to acquire the labeled second training test case set and its second attribute information;
  • the second dividing unit is configured to divide the second training test case set according to the second attribute information to obtain a plurality of second training test case subsets;
  • the fine-tuning training unit is configured to perform fine-tuning training on the corresponding initial training models according to the plurality of second training test case subsets to obtain multiple language representation models corresponding to the second attribute information;
  • the second generating module further includes:
  • the first generating unit is configured to classify the corresponding first training test case subsets and second training test case subsets through the multiple language representation models to obtain multiple test case sets corresponding to the first attribute information, and generate the test case set knowledge base based on the multiple test case sets.
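The divide-by-attribute pipeline these units describe might be outlined as follows. The actual per-attribute model training is stubbed out, and the attribute names and case texts are illustrative; only the partition-and-assemble flow is shown.

```python
from collections import defaultdict

def divide_by_attribute(cases: list) -> dict:
    """Group (attribute, case_text) pairs into per-attribute subsets,
    as the dividing units do for the training test case sets."""
    subsets = defaultdict(list)
    for attribute, text in cases:
        subsets[attribute].append(text)
    return dict(subsets)

def build_knowledge_base(first_set: list, second_set: list) -> dict:
    """Assemble one sub-knowledge-base per attribute, covering both the
    unlabeled first and the labeled second training test case sets.
    (A stand-in for classification by the per-attribute models.)"""
    kb = defaultdict(list)
    for subsets in (divide_by_attribute(first_set), divide_by_attribute(second_set)):
        for attribute, texts in subsets.items():
            kb[attribute].extend(texts)
    return dict(kb)

kb = build_knowledge_base(
    [("full product cases", "login check")],
    [("full product cases", "logout check"),
     ("personalized product cases", "theme check")],
)
```

The resulting dict keeps one bucket per attribute, matching the multiple attribute-specific sub-knowledge-bases described earlier in the text.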
  • test case set generating device further includes:
  • the second obtaining unit is used to obtain candidate attribute information
  • the first acquisition module further includes:
  • a determining unit configured to determine a target test case set corresponding to the candidate attribute information in the test case set knowledge base
  • the third obtaining unit is configured to retrieve the target test case set according to the semantic analysis result to obtain the similarity between the case keywords and the test cases in the target test case set to obtain the retrieval result.
  • test case set generating device further includes:
  • the first detection module is configured to detect whether the similarity in the retrieval result is greater than or equal to a first preset threshold;
  • the first output module is configured to, if the similarity is greater than or equal to the first preset threshold, determine that the same case set corresponding to the case keywords exists in the test case set knowledge base, and obtain and output the target same case set.
  • test case set generating device further includes:
  • the second detection module is configured to detect, if the similarity is less than the first preset threshold, whether the similarity is greater than a second preset threshold, wherein the second preset threshold is less than the first preset threshold;
  • the fourth obtaining module is configured to determine, if the similarity is greater than the second preset threshold, that a similar case set exists in the test case set knowledge base, and then execute the step of obtaining a target similar case set;
  • the second generation module is configured to output prompt information to prompt the user to manually generate a test case set if the similarity is less than or equal to the second preset threshold.
  • test case set generating device further includes:
  • the fifth acquisition module is configured to acquire the labeled test case set manually generated by the user;
  • the update module is configured to update the test case set knowledge base according to the labeled test case set.
  • Each module in the above test case set generating device corresponds to a step in the above embodiments of the test case set generation method; their functions and implementation processes are not repeated here.
  • The present application also provides a computer-readable storage medium with a test case set generation program stored thereon.
  • When the test case set generation program is executed by a processor, the steps of the above test case set generation method are implemented.

Abstract

The present application relates to the technical field of financial technology (fintech). Disclosed are a test case set generation method, apparatus, and device, and a computer-readable storage medium. The test case set generation method comprises: obtaining case keywords and performing semantic analysis on the case keywords to obtain a semantic analysis result; retrieving a test case set knowledge base according to the semantic analysis result to obtain a retrieval result, wherein the test case set knowledge base is generated by training a preset training model constructed by combining a BERT model with a knowledge graph; if it is determined, according to the retrieval result, that a similar case set exists in the test case set knowledge base, obtaining a target similar case set; and analyzing the case keywords and the target similar case set using the knowledge graph to generate a test case set.
PCT/CN2021/081873 2020-06-18 2021-03-19 Procédé, appareil et dispositif de génération d'ensemble de cas de test, et support de stockage lisible par ordinateur WO2021253904A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010563141.X 2020-06-18
CN202010563141.XA CN111708703A (zh) 2020-06-18 2020-06-18 测试案例集生成方法、装置、设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021253904A1 true WO2021253904A1 (fr) 2021-12-23

Family

ID=72541309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081873 WO2021253904A1 (fr) 2020-06-18 2021-03-19 Procédé, appareil et dispositif de génération d'ensemble de cas de test, et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN111708703A (fr)
WO (1) WO2021253904A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230088970A1 (en) * 2021-09-20 2023-03-23 Salesforce.Com, Inc. Impact analysis based on api functional tesing
CN117033253A (zh) * 2023-10-10 2023-11-10 北京轻松怡康信息技术有限公司 一种接口测试方法、装置、电子设备及存储介质
CN117057173A (zh) * 2023-10-13 2023-11-14 浙江大学 一种支持发散思维的仿生设计方法、系统及电子设备
CN117453576A (zh) * 2023-12-25 2024-01-26 企迈科技有限公司 基于DXM模型的SaaS软件测试用例构建方法

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708703A (zh) * 2020-06-18 2020-09-25 深圳前海微众银行股份有限公司 测试案例集生成方法、装置、设备及计算机可读存储介质
CN112256566B (zh) * 2020-09-28 2024-03-05 中国建设银行股份有限公司 一种测试案例的保鲜方法和装置
CN112199285B (zh) * 2020-10-12 2023-08-01 中国农业银行股份有限公司 一种测试案例优选方法、装置及电子设备
CN112528893A (zh) * 2020-12-15 2021-03-19 南京中兴力维软件有限公司 异常状态的识别方法、装置及计算机可读存储介质
CN112906361A (zh) * 2021-02-09 2021-06-04 上海明略人工智能(集团)有限公司 文本数据的标注方法和装置、电子设备和存储介质
CN113392642B (zh) * 2021-06-04 2023-06-02 北京师范大学 一种基于元学习的育人案例自动标注系统及方法
CN113609011B (zh) * 2021-07-30 2023-11-03 建信人寿保险股份有限公司 一种保险产品工厂的测试方法、装置、介质和设备
CN113900954B (zh) * 2021-10-28 2022-06-10 航天中认软件测评科技(北京)有限责任公司 一种使用知识图谱的测试用例推荐方法及装置
CN115328813B (zh) * 2022-10-11 2023-02-03 成都飞机工业(集团)有限责任公司 一种测试用例设计方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150178273A1 (en) * 2013-12-20 2015-06-25 Microsoft Corporation Unsupervised Relation Detection Model Training
CN109885823A (zh) * 2017-12-01 2019-06-14 武汉楚鼎信息技术有限公司 一种金融行业的分布式语义识别方法及系统装置
CN110727779A (zh) * 2019-10-16 2020-01-24 信雅达系统工程股份有限公司 基于多模型融合的问答方法及系统
CN111160756A (zh) * 2019-12-26 2020-05-15 马上游科技股份有限公司 基于二次人工智能算法的景区评估方法及模型
CN111708703A (zh) * 2020-06-18 2020-09-25 深圳前海微众银行股份有限公司 测试案例集生成方法、装置、设备及计算机可读存储介质


Also Published As

Publication number Publication date
CN111708703A (zh) 2020-09-25


Legal Events

Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21826586; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21826586; Country of ref document: EP; Kind code of ref document: A1)