CN113836005A - Virtual user generation method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113836005A (application CN202111045204.3A)
- Authority
- CN
- China
- Prior art keywords
- target
- user
- sample set
- hash value
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The embodiment of the application discloses a virtual user generation method, a virtual user generation device, electronic equipment and a storage medium. Because the time sequence sample set obtained through the trained recurrent neural network model contains the dependency relationship among the user data at different moments, the virtual user generated according to the time sequence sample set can more accurately simulate the real user according to the dependency relationship.
Description
Technical Field
The invention relates to the field of testing, in particular to a virtual user generation method and device, electronic equipment and a storage medium.
Background
With the development of the Internet, more and more services are realized through it, generally by means of corresponding applications. Testing is an important means of ensuring application quality. In current practice, there are two main methods for testing an application according to user behavior.
One is manual participation: for example, when the concurrent login capacity of an application needs to be tested, many people are required to log in to the application at the same time. The other is to generate virtual users and then test the application against them. Because manual participation involves a large workload and low efficiency, the virtual user method is commonly used at present. However, currently generated virtual users cannot effectively simulate real users, which causes errors in testing.
Disclosure of Invention
The embodiment of the application provides a virtual user generation method and device, electronic equipment and a storage medium, and can solve the technical problem that the currently generated virtual user cannot effectively simulate a real user.
The embodiment of the application provides a method for generating a virtual user, which comprises the following steps:
acquiring user data, and inputting the user data into a trained convolutional neural network model for recognition to obtain an initial sample set corresponding to each real user;
inputting the initial sample set into a trained recurrent neural network model, and extracting the dependency relationship among samples at different moments to obtain a time sequence sample set;
and generating a virtual user according to the time series sample set.
Based on the aspect provided above, an embodiment of the present invention further provides a device for generating a virtual user, including:
the first input module is used for acquiring user data and inputting the user data into a trained convolutional neural network model for recognition to obtain an initial sample set corresponding to each real user;
the second input module is used for inputting the initial sample set into a trained recurrent neural network model, extracting the dependency relationship among samples at different moments and obtaining a time sequence sample set;
and the generating module is used for generating the virtual user according to the time series sample set.
In addition, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores a computer program, and the processor is configured to run the computer program in the memory to implement the method for generating a virtual user according to the embodiment of the present invention.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program is suitable for being loaded by a processor to perform any one of the methods for generating a virtual user provided in the embodiment of the present invention.
In the embodiment of the application, the acquired user data is first input into a trained convolutional neural network model for recognition to obtain an initial sample set corresponding to each real user, and the initial sample set is then input into the trained recurrent neural network model to extract the dependency relationships among samples at different moments, obtaining a time series sample set. Because the time series sample set obtained through the trained recurrent neural network model contains the dependency relationships among the user data at different moments, the virtual user generated from the time series sample set can more accurately simulate a real user according to those dependency relationships.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for generating a virtual user according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a trained recurrent neural network model provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature matrix provided by an embodiment of the invention;
FIG. 4 is a schematic structural diagram of a convolutional neural model to be trained according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a virtual user generating apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a virtual user generation method and device, electronic equipment and a storage medium. The virtual user generating device may be integrated in the electronic device, and the virtual user generating method may be applied to the electronic device. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like, and the specific type of the electronic device is not specifically limited in the present application.
To ensure the performance of an application, the application is typically tested before the application or a new version of the application is brought online. Currently, virtual users are typically constructed to simulate real users to test applications.
There are two main methods for constructing virtual users. One is self-construction by developers; such a virtual user cannot effectively simulate a real user. The other is to cluster the data of real users accessing the application and then randomly select clustered data to construct virtual users. However, the dependency relationships between real users' behaviors are lost in the clustering process; for example, some access behaviors must occur before or after others (a user must register on the application before logging in, and must log in before performing other operations on it). Because these dependencies are lost, the generated virtual users cannot effectively simulate real users, and errors occur when testing with them.
In order to solve the technical problem that currently generated virtual users cannot effectively simulate real users, an embodiment of the application provides a virtual user generation method. In the method, the acquired user data is first input into a trained convolutional neural network model for recognition to obtain an initial sample set corresponding to each real user, and the initial sample set is then input into the trained recurrent neural network model to extract the dependency relationships among samples at different moments, obtaining a time series sample set. Because the time series sample set obtained through the trained recurrent neural network model contains the dependency relationships among the user data at different moments, the virtual user generated from the time series sample set can more accurately simulate a real user according to those dependency relationships.
The following describes in detail a method for generating a virtual user according to an embodiment of the present application. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
As shown in fig. 1, the specific flow of the virtual user generation method is as follows:
and S101, acquiring user data, inputting the user data into a trained convolutional neural model for recognition, and obtaining an initial sample set corresponding to each real user.
The user data includes both user offline data and user online data. User offline data refers to data that a user generates without virtual media such as the Internet; it can be collected by, for example, a camera.
User online data refers to data that a user generates through virtual media such as the Internet; it can be collected through event-tracking (buried-point) instrumentation.
After all the user data are obtained, the electronic device inputs the user data into a trained Convolutional Neural Network (CNN) model for recognition to obtain an initial sample set corresponding to each real user.
And S102, inputting the initial sample set into the trained recurrent neural network model, and extracting the dependency relationship among samples at different moments to obtain a time sequence sample set.
The dependencies between samples at different time instants refer to dependencies between samples at different time instants in the same initial sample set.
A dependency relationship exists when the user behavior data at time A affects the user behavior data at time B. For example, if the sample at time A is the user's login behavior and the sample at time B is the user's content-collection behavior on the application, the sample at time B can only be generated on the basis of the sample at time A, so the two samples have a dependency relationship.
In the embodiment of the present application, the trained Recurrent Neural Network (RNN) model includes a hidden layer and an output layer; that is, the initial sample set is input directly into the hidden layer of the trained recurrent neural network model. The specific operation of the trained model is described below with reference to fig. 2.
Referring to 201 in fig. 2, after a sample X is input into the hidden layer of the trained recurrent neural network model for feature extraction, a feature value S is obtained. The feature value S is then input into the output layer to obtain a sample O, and S is stored at the same time, so that when a sample is input into the hidden layer at the next moment, the feature value at that moment can be calculated from both the newly input sample and the stored feature value S.
For example, refer to 202 in fig. 2, which is 201 in fig. 2 expanded along the time sequence. As 202 in fig. 2 shows, after the sample X_t at time t is input into the hidden layer, the output S_t of the hidden layer at time t is related not only to the sample X_t but also to the hidden-layer output S_{t-1} at time (t-1). This can be formulated as:

S_t = f(U × X_t + W × S_{t-1})
where U represents the weight matrix between the input layer and the hidden layer of the trained recurrent neural network model, W represents the recurrent weight of the hidden layer, and f() represents the activation function.
The output value of the output layer can be expressed by the following formula:

O_t = g(V × S_t)

where V denotes the weight matrix between the hidden layer and the output layer, and g() denotes the softmax function.
As can be seen from the above formulas, in the trained recurrent neural network model, the output value for a sample at the current time is affected by the samples input at previous times. The trained recurrent neural network model can therefore extract the dependency relationships between samples at different times. Thus, after the electronic device inputs the initial sample set into the trained recurrent neural network model, the dependency relationships between samples at different times can be extracted to obtain the time series sample set.
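As an illustration of the recurrence above, the following minimal sketch computes S_t and O_t for a sequence of samples. The layer sizes, random weights, and the tanh activation are assumptions for demonstration only, not values from the embodiment.

```python
import numpy as np

def rnn_step(x_t, s_prev, U, W, V):
    """One time step: S_t = f(U*X_t + W*S_{t-1}), O_t = g(V*S_t)."""
    s_t = np.tanh(U @ x_t + W @ s_prev)  # hidden state carries the dependency on earlier samples
    logits = V @ s_t
    o_t = np.exp(logits - logits.max())
    o_t /= o_t.sum()                     # g() is the softmax function
    return s_t, o_t

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 3))              # input-to-hidden weights (assumed sizes)
W = rng.normal(size=(4, 4))              # recurrent weights of the hidden layer
V = rng.normal(size=(5, 4))              # hidden-to-output weights
s = np.zeros(4)                          # initial feature value S
for x_t in rng.normal(size=(6, 3)):      # six samples in time order
    s, o = rnn_step(x_t, s, U, W, V)
```

Because s is fed back at every step, the output at the current time depends on all earlier inputs, which is exactly the dependency the embodiment extracts.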
And step S103, generating a virtual user according to the time series sample set.
After obtaining the time-series sample set, the electronic device may generate a virtual user according to the time-series sample set. A virtual user refers to a test script generated from a set of time series samples.
As can be seen from the above, in the embodiment of the present application, the obtained user data is first input into the trained convolutional neural network model for recognition to obtain the initial sample set corresponding to each real user, and the initial sample set is then input into the trained recurrent neural network model to extract the dependency relationships between samples at different times, obtaining the time series sample set. Because the time series sample set obtained through the trained recurrent neural network model contains the dependency relationships among the user data at different moments, the virtual user generated from the time series sample set can more accurately simulate a real user according to those dependency relationships.
The method for creating the virtual user will be further described below.
In some embodiments, after the user data is acquired in step S101, the method further includes: performing a hash operation on the user data to obtain a target hash value set corresponding to the user data. That is, the user data is represented by target hash values for subsequent calculation. A target hash value may consist of three fields: a behavior header, an action value, and a state parameter.
For example, a piece of user data is represented by the target hash value 1001202. The field 100 represents the behavior header, e.g., a user's login behavior; the field 12 represents the action value, e.g., the action of the user entering an account and a password to log in; and the field 02 represents the state parameter, e.g., whether that login succeeded or failed.
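The field decomposition in this example can be sketched as follows; the 3/2/2 field widths are an assumption inferred from the example hash 1001202 rather than a rule stated in the embodiment.

```python
def split_hash(h):
    """Split a 7-digit target hash value into (behavior header, action value, state parameter)."""
    return h[:3], h[3:5], h[5:7]  # assumed widths: 3 + 2 + 2 digits

header, action, state = split_hash("1001202")  # "100", "12", "02"
```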
It should be noted that, before performing the hash operation on the user offline data, the user offline data may be decoded first, and the hash operation is then performed on the decoded data.
After the electronic device performs the hash operation on the user data to obtain the target hash values, it may check them in order to filter out erroneous ones. Therefore, in other embodiments, the detailed process of performing the hash operation on the user data to obtain the target hash value set may be as follows:
and carrying out Hash operation on the user data to obtain an initial Hash value set. The target hash value in the initial set of hash values is then checked. And if the first target hash value with errors exists, searching a second target hash value with the same field as the first target hash value from the initial hash value set. And if the time interval between the first target hash value and the second target hash value meets a first preset condition, combining the first target hash value and the second target hash value to obtain a target hash value set.
For example, if the first target hash value in the initial hash value set is 31023NA (where NA denotes a missing number), a second target hash value comprising field 310 and/or field 23 is looked up in the initial hash value set. If the second target hash value 3102304 is found and the time interval between 3102304 and 31023NA satisfies the first preset condition, the two are combined to obtain the target hash value 3102304.
And if a second target hash value with the same field as the first target hash value is not found, deleting the first target hash value to obtain a target hash value set.
It should be appreciated that the second target hash value may be searched for in the initial hash value set using binary search. The first preset condition can be set by the user according to actual conditions.
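A minimal sketch of the repair logic above, assuming the time-interval condition has already been checked and that "NA" marks the missing digits; the matching rule (shared leading fields) is an illustrative simplification.

```python
def repair_hash(first, initial_set):
    """Complete a damaged hash such as '31023NA' from the initial hash value set,
    or return None so the caller can delete it when no match exists."""
    prefix = first.split("NA")[0]  # intact leading fields, e.g. "31023"
    for candidate in initial_set:
        if candidate != first and candidate.startswith(prefix):
            return candidate       # combined result keeps the complete value
    return None
```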
In other embodiments, if the user data includes user behavior data and user attribute data, performing the hash operation on the user data to obtain the corresponding target hash value set includes:
performing the hash operation on the user behavior data and the user attribute data to obtain a first sub-hash value corresponding to the user attribute data and a second sub-hash value corresponding to the user behavior data; then extracting the feature value of the first sub-hash value and concatenating it with the second sub-hash value to obtain a target hash value, the target hash values forming the target hash value set.
The concatenation may be horizontal concatenation, that is, directly appending the feature value after the second sub-hash value.
It should be noted that there are multiple pieces of user data, and each piece of user data has a corresponding target hash value, so the target hash value set may include multiple target hash values.
After the feature value of the first sub-hash value is extracted, its correctness can be checked; if the check passes, the verified feature value is concatenated with the second sub-hash value. The checking process can be as follows:
and searching a characteristic value interval corresponding to the characteristic value of the first sub-hash value, and if the characteristic value interval corresponding to the characteristic value is not found and the action value field in the first sub-hash value does not have a corresponding semantic value, marking the first sub-hash value and carrying out error reminding.
For example, suppose there are three feature value intervals: interval 1 (00-50) represents the high-frequency access state, interval 2 (51-200) represents the medium-frequency access state, and interval 3 (201-500) represents the low-frequency access state. If the feature value of the first sub-hash value is 501, it falls into no interval. If, at the same time, the semantic value corresponding to the action value of the first sub-hash value is a long press, but the target application (assuming the user data here is data from accessing the target application) has no long-press function module, then the action value field has no corresponding semantic value. The first sub-hash value is therefore considered erroneous.
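The interval check from this example can be sketched as follows; the interval bounds follow the text, while the function name and return values are illustrative.

```python
# Feature value intervals from the example: (low, high, access state)
INTERVALS = [(0, 50, "high-frequency"),
             (51, 200, "medium-frequency"),
             (201, 500, "low-frequency")]

def access_state(feature_value):
    """Return the access state for a feature value, or None when no interval matches
    (in which case the first sub-hash value is marked and an error reminder is issued)."""
    for low, high, state in INTERVALS:
        if low <= feature_value <= high:
            return state
    return None
```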
In addition, after the target hash values are obtained, target hash values with similar behavior headers can be further assigned to the same target hash value subset, and the subsets form the target hash value set. The position of each subset in the set can be determined according to the size of the behavior header.
For example, the target hash values with behavior headers 310 and 314 are grouped into the same subset, and those with behavior headers 20 and 22 into another.
Similar behavior headers indicate that the user data corresponding to the two target hash values are similar. For example, behavior header 310 represents the login behavior of a real user, and behavior header 314 represents the registration behavior of a real user.
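Grouping by similar behavior headers might be sketched as below; treating headers in the same band of ten as "similar" is an assumption chosen to reproduce the 310/314 and 20/22 example, not the embodiment's actual closeness rule.

```python
from collections import defaultdict

def group_headers(headers, band=10):
    """Assign behavior headers to subsets of similar values, ordered by header size."""
    groups = defaultdict(list)
    for h in headers:
        groups[h // band].append(h)  # headers within one band share a subset
    return [sorted(groups[k]) for k in sorted(groups)]

subsets = group_headers([314, 310, 22, 20])  # [[20, 22], [310, 314]]
```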
After the target hash value set is obtained, the electronic device inputs it into the trained convolutional neural network model to obtain the initial sample set corresponding to each real user.
In other embodiments, before inputting the user data into the trained convolutional neural network model for recognition, the method further includes:
acquiring training user data, and inputting the training user data into the convolutional neural network model to be trained for recognition to obtain each training sample set; then inputting the training sample sets into the recurrent neural network model to be trained to obtain a loss value. If the loss value is greater than a preset threshold, the network parameters of the convolutional neural network model to be trained and of the recurrent neural network model to be trained are updated according to the loss value, and the process returns to acquiring training user data; if the loss value is less than or equal to the preset threshold, the trained convolutional neural network model and the trained recurrent neural network model are obtained.
Before the training user data is input into the convolutional neural network model to be trained, its network parameters, such as the size of the convolution kernels of the convolutional layers, the elements in the convolution kernels, and the convolution stride, need to be set in advance; they are then updated after the training user data is input. However, if the elements in a convolution kernel are set incorrectly, the number of training iterations increases, which lengthens the training time of the neural network model.
To solve this technical problem, in some possible implementations, the detailed process of inputting the training user data into the convolutional neural network model to be trained for recognition to obtain each training sample set may be:
inputting the training user data into a convolutional layer of the convolutional neural network model to be trained for feature extraction to obtain a feature matrix; determining a first element and a second element in the feature matrix; and performing a difference operation on the first element and the second element to obtain an operation value. If the operation value is within a preset threshold interval, the convolution kernel of the convolutional layer is updated according to the kernel filling principle, and the process returns to the feature extraction step. If the operation value is not within the preset threshold interval, the feature matrix is input into the fully connected layer of the convolutional neural network model to be trained for recognition to obtain each training sample set.
For example, as shown in fig. 3, assuming the first element of the feature matrix is 2 and the second element is 8, the difference operation on them yields the operation value 6. If the operation value 6 is within the preset threshold interval, which indicates that the convolution kernel is set incorrectly, the kernel is updated according to the kernel filling principle and the feature extraction step is repeated. If the operation value 6 is not within the preset threshold interval, which indicates that the convolution kernel is set correctly, the feature matrix is input into the fully connected layer of the convolutional neural network model to be trained for recognition to obtain each training sample set.
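The element check in this example can be sketched as follows; the threshold interval (5, 10) is a made-up value chosen so that the operation value 6 from the example triggers a kernel update.

```python
def kernel_needs_update(first, second, interval=(5, 10)):
    """Difference operation on two feature-matrix elements; True means the
    convolution kernel was set incorrectly and must be refilled."""
    operation_value = abs(first - second)
    low, high = interval  # assumed preset threshold interval
    return low <= operation_value <= high
```

With the example elements 2 and 8, the operation value 6 falls inside the interval, so the kernel would be updated and feature extraction repeated.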
The convolution kernel filling principle, the preset threshold interval, the first element, and the second element may all be set by the user according to actual conditions, which is not limited herein.
According to the method and the device, after the feature matrix is obtained, a difference operation is further performed on the first element and the second element to obtain an operation value, and whether the elements in the convolution kernel need to be updated is determined from the operation value and the preset threshold interval. This improves the correctness of the configured convolution kernel, reduces the number of subsequent training iterations of the neural network model, and improves training efficiency.
In addition, referring to fig. 4, the convolutional neural network model to be trained in the present application may further include a pooling layer and an activation function layer. The activation function layer is located after the convolutional layer, and the pooling layer is located between the activation function layer and the fully connected layer. The pooling layer may perform maximum pooling and/or average pooling on the feature matrix.
In this case, if the operation value is not within the preset threshold interval, the feature matrix is input into the activation function layer of the convolutional neural model to be trained for a nonlinear operation to obtain an operation matrix; the operation matrix is then input into the pooling layer for a dimensionality-reduction operation to obtain a pooled matrix; finally, the pooled matrix is input into the fully-connected layer for recognition, obtaining each training sample set.
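The activation-then-pooling step can be sketched as follows. The ReLU activation and the 2x2 maximum-pooling window are illustrative assumptions; the document only says the pooling layer may perform maximum and/or average pooling.

```python
import numpy as np

def relu(x):
    """Nonlinear operation performed by the activation function layer."""
    return np.maximum(x, 0)

def max_pool2x2(m):
    """2x2 maximum pooling for dimensionality reduction (window size assumed)."""
    h, w = m.shape
    # crop to even dimensions, then take the max over each 2x2 block
    return m[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feat = np.array([[1., -2., 3., 0.],
                 [4., 5., -6., 7.],
                 [0., 1., 2., 3.],
                 [8., -1., 4., 2.]])
pooled = max_pool2x2(relu(feat))  # 4x4 operation matrix -> 2x2 pooled matrix
```

The pooled matrix would then be fed to the fully-connected layer for recognition.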
In other embodiments, after the initial sample set is input into the trained recurrent neural network model to extract the dependency relationship between samples at different times and obtain the time series sample set, because the dimension of the time series sample set is high, in order to save the storage space, the time series sample set may be quantized to obtain the target quantized sample set. And then generating a virtual user according to the target quantization sample set.
The quantization algorithm may be selected by the user according to actual conditions, and the present application is not limited herein. For example, when the Product Quantization (PQ) algorithm is selected as the quantization algorithm, the detailed process of quantizing the time series sample set according to the quantization algorithm may be:
The time series sample set is split according to its dimension to obtain a preset number of time series sample subsets. The samples in each subset are then clustered to obtain each subset's cluster centers and their numbers, and the samples in a subset are represented by the numbers of that subset's cluster centers. The whole time series sample set can therefore be represented by the cluster-center numbers of each subset, which greatly saves storage space.
For example, the dimension of the time series sample set is D, the time series sample set is split into M time series sample subsets, the dimension of each time series sample subset is D/M, and finally, the time series sample set can be represented by the numbers of M cluster centers.
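The split-and-encode step above can be sketched as a minimal product-quantization encoder. The codebooks (cluster centers per subspace) are assumed to have been learned beforehand, e.g. by k-means; the values below are illustrative.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Product-quantization encoding sketch.

    x: vector of dimension D; codebooks: list of M arrays, each of shape
    (K, D/M), holding the cluster centers of one subspace. Returns the M
    cluster-center numbers that represent x.
    """
    M = len(codebooks)
    subvectors = np.split(np.asarray(x, dtype=float), M)  # D -> M pieces of D/M
    codes = []
    for sub, centers in zip(subvectors, codebooks):
        dists = np.linalg.norm(centers - sub, axis=1)  # distance to each center
        codes.append(int(np.argmin(dists)))            # number of nearest center
    return codes

# D = 4, M = 2, so each subspace has dimension D/M = 2 with K = 2 centers
codebooks = [np.array([[0., 0.], [1., 1.]]),
             np.array([[0., 1.], [1., 0.]])]
codes = pq_encode([0.9, 1.1, 0.1, 0.9], codebooks)  # -> [1, 0]
```

The D-dimensional vector is thus stored as only M small integers.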
After the time-series sample set is quantized, the distance between samples in two time-series sample subsets may be calculated using either a symmetric distance calculation method or an asymmetric distance calculation method, which is not limited herein.
The symmetric distance calculation method calculates the distance between sample x and sample y as the distance between the cluster center q(x) corresponding to sample x and the cluster center q(y) corresponding to sample y. The asymmetric distance calculation method approximates the distance between sample x and sample y by the distance between the raw sample x and the cluster center q(y) corresponding to sample y.
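Both distance methods can be sketched over the product-quantized codes; the toy codebooks are illustrative assumptions.

```python
import numpy as np

def reconstruct(code, codebooks):
    """Rebuild the approximate vector q(.) from per-subspace center numbers."""
    return np.concatenate([cb[c] for cb, c in zip(codebooks, code)])

def symmetric_distance(code_x, code_y, codebooks):
    """SDC: distance between the cluster centers q(x) and q(y)."""
    return float(np.linalg.norm(reconstruct(code_x, codebooks)
                                - reconstruct(code_y, codebooks)))

def asymmetric_distance(x, code_y, codebooks):
    """ADC: distance between the raw sample x and the center q(y)."""
    qy = reconstruct(code_y, codebooks)
    return float(np.linalg.norm(np.asarray(x, dtype=float) - qy))

# two sub-codebooks (D = 4, M = 2), assumed learned beforehand
codebooks = [np.array([[0., 0.], [1., 1.]]),
             np.array([[0., 1.], [1., 0.]])]
```

ADC is usually the more accurate of the two, since only one side of the comparison is approximated by a cluster center.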
In this embodiment, the time-series sample set is quantized to obtain a target quantized sample set. And then generating a virtual user according to the target quantization sample set, thereby saving the storage space.
In other embodiments, errors may occur in the quantized sample set. Thus, after the time series sample set is quantized, the quantized sample set can be checked for correctness. The specific checking process may be:
The time series sample set is quantized according to the quantization algorithm to obtain an initial quantized sample set. Then, according to a local alignment algorithm, the time series sample set is locally matched with the initial quantized sample set. The successfully matched initial quantized sample set can be used as the target quantized sample set.
The local alignment algorithm may be selected by the user according to the actual situation, which is not limited herein. When the Smith-Waterman (SW) algorithm is adopted as the local alignment algorithm, the specific process of locally matching the time series sample set with the initial quantized sample set is as follows:
Assume that the initial quantized sample set is A = a_1 a_2 ... a_n, with length n, and the time series sample set is B = b_1 b_2 ... b_m, with length m. A matching score matrix H is set, and its first row H_{k,0} and first column H_{0,l} are initialized to 0 (1 ≤ k ≤ n, 1 ≤ l ≤ m); the remaining entries of the score matrix H can be determined according to the following formula:

H_{i,j} = max(0, H_{i-1,j-1} + s(a_i, b_j), H_{i-1,j} - W1, H_{i,j-1} - W1)
wherein s(a_i, b_j) denotes the matching score of a_i and b_j, W1 represents a gap penalty set to 2, 1 ≤ i ≤ n, and 1 ≤ j ≤ m. After the score matrix H is obtained, backtracking is performed, starting from the largest element in the score matrix H. If a_i = b_j, backtrack to the upper-left element of the score matrix H; if a_i ≠ b_j, backtrack to whichever of the upper-left, upper and left elements has the largest value; if these maxima are equal, select among them according to priority.
After backtracking is finished, the matched characters are written out along the backtracking path. If the upper-left element was reached, a_i is added to sequence A' and b_j is added to sequence B'; if the upper element was reached, a_i is added to sequence A' and a gap '_' is added to sequence B'; if the left element was reached, a gap '_' is added to sequence A' and b_j is added to sequence B'. Backtracking ends when the locally optimal matching sequence is obtained. Finally, the matching degree between the time series sample set and the initial quantized sample set can be determined from sequence A' and sequence B'.
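The procedure above can be sketched as a minimal Smith-Waterman implementation. The match/mismatch scores are illustrative assumptions (the document only fixes the gap penalty W1 = 2), and ties are broken with the upper-left > upper > left priority described above.

```python
def smith_waterman(a, b, match=3, mismatch=-3, gap=2):
    """Smith-Waterman local alignment sketch.

    Returns the aligned subsequences A' and B', with '_' marking gaps.
    match/mismatch scores are assumed values; gap follows W1 = 2.
    """
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]  # score matrix, first row/col 0
    best, bi, bj = 0, 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # upper-left: match/mismatch
                          H[i - 1][j] - gap,    # upper: gap in B
                          H[i][j - 1] - gap)    # left: gap in A
            if H[i][j] > best:
                best, bi, bj = H[i][j], i, j
    # backtrack from the largest element until a zero score is reached
    a_out, b_out = [], []
    i, j = bi, bj
    while i > 0 and j > 0 and H[i][j] > 0:
        s = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + s:       # upper-left has priority
            a_out.append(a[i - 1]); b_out.append(b[j - 1]); i -= 1; j -= 1
        elif H[i][j] == H[i - 1][j] - gap:       # then the upper element
            a_out.append(a[i - 1]); b_out.append('_'); i -= 1
        else:                                    # finally the left element
            a_out.append('_'); b_out.append(b[j - 1]); j -= 1
    return ''.join(reversed(a_out)), ''.join(reversed(b_out))
```

The matching degree can then be computed from the returned A' and B', e.g. as the fraction of aligned positions without gaps.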
In this embodiment, the time series sample set and the initial quantized sample set are locally matched by the local alignment algorithm, and only the successfully matched initial quantized sample set is used as the target quantized sample set. This improves the accuracy of the target quantized sample set, and thereby the accuracy of the virtual user generated according to it.
Currently, associating a virtual user with an application program is typically a manual operation, which is inefficient and time-consuming. To solve this technical problem, in the embodiment of the present application, after the virtual user is generated according to the time-series sample set, the method further includes:
A first test case of the to-be-released version of the target application program is generated according to the virtual user, and the to-be-released version is tested according to the first test case to obtain a first threshold value. A second test case of the released version of the target application program is then generated according to the user data, and the released version is tested according to the second test case to obtain a second threshold value. Finally, a target value is determined according to the first threshold and the second threshold. If the target value satisfies a second preset condition, indicating that the virtual user is suitable for testing the target application program, the virtual user and the target application program are stored in association, so that subsequent tests of the target application program can directly use the virtual user associated with it.
A test case is a description of a test task for a specific software product, including test objects, test environment, input data, test steps, expected results, test scripts, and the like. The second preset condition may be set according to the actual situation, and the present application is not limited herein. For example, when the target value is obtained by dividing the first threshold by the second threshold, the second preset condition may be that the target value is greater than or equal to 90%.
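The example condition above can be sketched directly; the 90% ratio follows the document's example, while the function name is a hypothetical label.

```python
def virtual_user_fits(first_threshold, second_threshold, minimum_ratio=0.9):
    """Target-value check sketch.

    Follows the document's example: the target value is the first threshold
    (to-be-released version tested with the virtual user) divided by the
    second threshold (released version tested with real user data), and the
    second preset condition is 'target value >= 90%'.
    """
    target_value = first_threshold / second_threshold
    return target_value >= minimum_ratio
```

If the check passes, the virtual user and the target application program would be stored in association.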
In this embodiment, a first threshold is obtained by testing a to-be-released version of a target application program using a virtual user, a released version of the target application program is tested using user data to obtain a second threshold, a target value is determined according to the first threshold and the second threshold, and finally whether the virtual user is suitable for the target application program is automatically determined according to the target value, so that when the virtual user is suitable for the target application program, the virtual user and the target application program can be automatically associated.
Although the virtual user is considered suitable for the target application program when the target value satisfies the second preset condition, a mismatch between the virtual user and the target application program may still occur. For example, the real user corresponding to the user data used to generate the virtual user may be female, while the target application program is oriented to male users. As another example, the user data used to generate the virtual user may include long-press operation data, while the target application program has no long-press function module. In such cases, even though the virtual user is generated from real user data, the result of testing the target application program with the virtual user has no reference value.
In order to prevent this, the embodiment of the present application further includes: if the target value meets a second preset condition, determining a target parameter according to the virtual user; and comparing the target parameters with preset parameters of the target application program. And if the target parameters are consistent with the preset parameters, performing associated storage on the virtual user and the target application program.
The preset parameters of the target application may include user attribute parameters and/or user behavior parameters.
In this embodiment, when the target value satisfies the second preset condition, the target parameter is further determined according to the virtual user, and then the target parameter is compared with the preset parameter of the target application program. And only when the target parameters are consistent with the preset parameters, the virtual user and the target application program are subjected to associated storage, so that the matching accuracy of the virtual user and the target application program is further improved.
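The parameter comparison can be sketched as below. The dict layout and the key names (gender, long-press support) are illustrative assumptions drawn from the document's examples; the preset parameters may include user attribute and/or user behavior parameters.

```python
def parameters_match(target_params, preset_params):
    """Compare the virtual user's target parameters with the target
    application's preset parameters.

    Every preset parameter must be matched by the virtual user; extra
    parameters on the virtual user side are ignored.
    """
    return all(target_params.get(key) == value
               for key, value in preset_params.items())

# hypothetical preset parameters of a target application program
preset = {"gender": "female", "supports_long_press": True}
# hypothetical target parameters determined from the virtual user
virtual_user = {"gender": "female", "supports_long_press": True, "age": 30}
```

Only when the comparison succeeds would the virtual user and the target application program be stored in association.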
In order to better implement the method, correspondingly, the embodiment of the invention further provides a virtual user generating device, which is specifically integrated in the electronic device.
Referring to fig. 5, the virtual user generation apparatus includes:
the first input module 501 is configured to obtain user data, and input the user data into a trained convolutional neural model for recognition, so as to obtain an initial sample set corresponding to each real user.
A second input module 502, configured to input the initial sample set into the trained recurrent neural network model, extract a dependency relationship between samples at different times, and obtain a time series sample set.
A generating module 503, configured to generate a virtual user according to the time-series sample set.
Optionally, the virtual user generating device further includes:
and the Hash operation module is used for carrying out Hash operation on the user data to obtain a target Hash value set corresponding to the user data.
Accordingly, the first input module 501 is configured to perform:
and inputting the target hash value set into the trained convolutional neural model to obtain an initial sample set corresponding to each real user.
Optionally, the hash operation module is specifically configured to perform:
carrying out Hash operation on user data to obtain an initial Hash value set;
checking a target hash value in the initial hash value set;
if the first target hash value with errors exists, searching a second target hash value with the same field as the first target hash value from the initial hash value set;
and if the time interval between the first target hash value and the second target hash value meets a first preset condition, combining the first target hash value and the second target hash value to obtain a target hash value set.
Optionally, the user data comprises user behavior data and user attribute data; correspondingly, the hash operation module is specifically configured to perform:
performing hash operation on the user behavior data and the user attribute data to obtain a first sub-hash value corresponding to the user attribute data and a second sub-hash value corresponding to the user behavior data;
and extracting the characteristic value of the first sub-hash value, and splicing the characteristic value and the second sub-hash value to obtain a target hash value, wherein the target hash value forms a target hash value set.
Optionally, the virtual user generating device further includes:
and the quantization module is used for quantizing the time sequence sample set according to a quantization algorithm to obtain a target quantized sample set.
Accordingly, the generating module 503 is specifically configured to perform:
and generating a virtual user according to the target quantization sample set.
Optionally, the quantization module is specifically configured to perform:
quantizing the time sequence sample set according to a quantization algorithm to obtain an initial quantized sample set;
according to a local comparison algorithm, locally matching a time sequence sample set with an initial quantization sample set;
and taking the initial quantized sample set successfully matched as a target quantized sample set.
Optionally, the virtual user generating device further includes:
a first test module to:
generating a first test case of a to-be-released version of a target application program according to a virtual user;
and testing the version to be released according to the first test case to obtain a first threshold value.
A second test module to:
generating a second test case of the released version of the target application program according to the user data;
And testing the released version according to the second test case to obtain a second threshold value.
A determination module to determine a target value based on the first threshold and the second threshold.
And the association storage module is used for associating and storing the virtual user and the target application program if the target value meets a second preset condition.
Optionally, the associated storage module is specifically configured to perform:
if the target value meets a second preset condition, determining a target parameter according to the virtual user;
comparing target parameters with preset parameters of the target application program;
and if the target parameters are consistent with the preset parameters, performing associated storage on the virtual user and the target application program.
Optionally, the virtual user generating device further includes:
the training module is used for acquiring training user data and inputting the training user data into a convolutional neural model to be trained for recognition to obtain each training sample set;
inputting the training sample set into a recurrent neural network model to be trained to obtain a loss value;
if the loss value is larger than the preset threshold value, updating the network parameters of the convolutional neural model to be trained and of the recurrent neural network model to be trained according to the loss value, and returning to the step of acquiring training user data;
and if the loss value is less than or equal to the preset threshold value, obtaining the trained convolutional neural model and the trained recurrent neural network model.
Optionally, the training module is specifically configured to perform:
inputting training user data into a convolution layer in a convolution neural model to be trained for feature extraction to obtain a feature matrix;
determining a first element and a second element in the feature matrix;
carrying out differential operation on the first element and the second element to obtain an operation value;
if the operation value is within the preset threshold interval, updating the convolution kernel of the convolution layer according to the convolution kernel filling principle, and returning to execute the step of inputting the training user data to the convolution layer in the convolution neural model to be trained for feature extraction to obtain a feature matrix;
and if the operation value is not within the preset threshold interval, inputting the characteristic matrix into a full-connection layer in the convolutional neural model to be trained for identification to obtain each training sample set.
The specific implementation process and the corresponding beneficial effects of this embodiment may refer to the above embodiment of the virtual user generation method, which is not described herein again.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, which shows a schematic structural diagram of the electronic device according to the embodiment of the present invention, specifically:
the electronic device may include components such as a processor 601 of one or more processing cores, memory 602 of one or more computer-readable storage media, a power supply 603, and an input unit 604. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 601 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing computer programs and/or modules stored in the memory 602 and calling data stored in the memory 602, thereby performing overall monitoring of the electronic device. Optionally, processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
The memory 602 may be used to store computer programs and modules, and the processor 601 executes various functional applications and data processing by operating the computer programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 601 with access to the memory 602.
The electronic device further comprises a power supply 603 for supplying power to the various components, and preferably, the power supply 603 is logically connected to the processor 601 through a power management system, so that functions of managing charging, discharging, power consumption, and the like are realized through the power management system. The power supply 603 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may further include an input unit 604, and the input unit 604 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 601 in the electronic device loads the executable file corresponding to the process of one or more computer programs into the memory 602 according to the following instructions, and the processor 601 runs the computer program stored in the memory 602, so as to implement various functions, such as:
acquiring user data, and inputting the user data into a trained convolutional neural model for recognition to obtain an initial sample set corresponding to each real user;
inputting the initial sample set into a trained recurrent neural network model, and extracting the dependency relationship among samples at different moments to obtain a time sequence sample set;
and generating the virtual user according to the time series sample set.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
In the method, firstly, the acquired user data is input into a trained convolutional neural model for recognition to obtain an initial sample set corresponding to each real user, and then the initial sample set is input into the trained recurrent neural network model to extract the dependency relationship among samples at different moments to obtain a time series sample set. Because the time sequence sample set obtained through the trained recurrent neural network model contains the dependency relationship among the user data at different moments, the virtual user generated according to the time sequence sample set can more accurately simulate the real user according to the dependency relationship.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a computer program, which may be stored in a computer-readable storage medium and loaded and executed by a processor, or by related hardware controlled by the computer program.
To this end, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute any one of the virtual user generation methods provided by the embodiment of the present invention. For example, the computer program may perform the steps of:
acquiring user data, and inputting the user data into a trained convolutional neural model for recognition to obtain an initial sample set corresponding to each real user;
inputting the initial sample set into a trained recurrent neural network model, and extracting the dependency relationship among samples at different moments to obtain a time sequence sample set;
and generating the virtual user according to the time series sample set.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
As the computer program stored in the computer-readable storage medium can execute any virtual user generation method provided in the embodiments of the present invention, beneficial effects that can be achieved by any virtual user generation method provided in the embodiments of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device executes the virtual user generation method.
The method, the apparatus, and the computer-readable storage medium for generating a virtual user according to the embodiments of the present invention are described in detail above, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (13)
1. A method for generating a virtual user, comprising:
acquiring user data, and inputting the user data into a trained convolutional neural model for recognition to obtain an initial sample set corresponding to each real user;
inputting the initial sample set into a trained recurrent neural network model, and extracting the dependency relationship among samples at different moments to obtain a time sequence sample set;
and generating a virtual user according to the time series sample set.
2. The method of claim 1, after said obtaining user data, further comprising:
carrying out Hash operation on the user data to obtain a target Hash value set corresponding to the user data;
correspondingly, the inputting the user data into the trained convolutional neural model for recognition to obtain an initial sample set corresponding to each real user includes:
and inputting the target hash value set into a trained convolutional neural model to obtain an initial sample set corresponding to each real user.
3. The method of claim 2, wherein the performing the hash operation on the user data to obtain the target set of hash values corresponding to the user data comprises:
carrying out Hash operation on the user data to obtain an initial Hash value set;
checking a target hash value in the initial hash value set;
if the first target hash value with errors exists, searching a second target hash value with the same field as the first target hash value from the initial hash value set;
and if the time interval between the first target hash value and the second target hash value meets a first preset condition, combining the first target hash value and the second target hash value to obtain the target hash value set.
4. The method of claim 2, wherein the user data includes user behavior data and user attribute data, and the hashing the user data to obtain a target hash value set corresponding to the user data includes:
performing hash operation on the user behavior data and the user attribute data to obtain a first sub-hash value corresponding to the user attribute data and a second sub-hash value corresponding to the user behavior data;
and extracting the characteristic value of the first sub-hash value, and splicing the characteristic value and the second sub-hash value to obtain a target hash value, wherein the target hash value constitutes the target hash value set.
5. The method of claim 1, wherein after inputting the initial sample set into the trained recurrent neural network model to extract the dependency relationship between the samples at different time instants, obtaining a time series sample set, further comprising:
quantizing the time sequence sample set according to a quantization algorithm to obtain a target quantized sample set;
accordingly, generating a virtual user from the set of time series samples comprises:
and generating a virtual user according to the target quantization sample set.
6. The method of claim 5, wherein the quantizing the set of time series samples according to a quantization algorithm to obtain a target set of quantized samples comprises:
quantizing the time sequence sample set according to a quantization algorithm to obtain an initial quantized sample set;
according to a local comparison algorithm, locally matching the time sequence sample set with the initial quantization sample set;
and taking the initial quantized sample set successfully matched as a target quantized sample set.
7. The method of claim 1, after generating a virtual user from the set of time series samples, further comprising:
generating a first test case of a to-be-released version of a target application program according to the virtual user;
testing the version to be released according to the first test case to obtain a first threshold value;
generating a second test case of the released version of the target application program according to the user data;
testing the released version according to the second test case to obtain a second threshold value;
determining a target value according to the first threshold value and the second threshold value;
and if the target value meets a second preset condition, performing associated storage on the virtual user and the target application program.
8. The method of claim 7, wherein the associating and storing the virtual user with the target application if the target value satisfies a second preset condition comprises:
if the target value meets a second preset condition, determining a target parameter according to the virtual user;
comparing the target parameters with preset parameters of the target application program;
and if the target parameters are consistent with the preset parameters, performing associated storage on the virtual user and the target application program.
9. The method of claim 1, further comprising, prior to said inputting said user data into a trained convolutional neural model for recognition:
acquiring training user data, and inputting the training user data into a convolutional neural model to be trained for recognition to obtain each training sample set;
inputting the training sample set into a to-be-trained recurrent neural network model to obtain a loss value;
if the loss value is larger than a preset threshold value, updating the network parameters of the convolutional neural model to be trained and the network parameters of the recurrent neural network model to be trained according to the loss value, and returning to the step of acquiring training user data;
and if the loss value is less than or equal to a preset threshold value, obtaining the trained convolutional neural model and the trained recurrent neural network model.
10. The method of claim 9, wherein the inputting of the training user data into the convolutional neural network model to be trained for recognition to obtain each training sample set comprises:
inputting the training user data into a convolutional layer of the convolutional neural network model to be trained for feature extraction to obtain a feature matrix;
determining a first element and a second element in the feature matrix;
performing a differential operation on the first element and the second element to obtain an operation value;
if the operation value is within a preset threshold interval, updating the convolution kernel of the convolutional layer according to a convolution-kernel filling principle, and returning to the step of inputting the training user data into the convolutional layer for feature extraction to obtain a feature matrix; and
if the operation value is not within the preset threshold interval, inputting the feature matrix into a fully connected layer of the convolutional neural network model to be trained for recognition to obtain each training sample set.
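Claim 10's control flow can be sketched as below. The claim leaves "first element", "second element", and the "convolution-kernel filling principle" unspecified, so the sketch makes assumptions: it takes the first two entries of the feature row, uses their difference as the operation value, and zero-pads the kernel as one plausible reading of "filling"; the summing "fully connected" stage is likewise a placeholder:

```python
def extract_features(data, kernel):
    """Toy 1-D convolution (valid cross-correlation) producing a feature row."""
    k = len(kernel)
    return [sum(d * w for d, w in zip(data[i:i + k], kernel))
            for i in range(len(data) - k + 1)]

def identify(data, kernel, low=-0.5, high=0.5, max_rounds=10):
    """If the differential operation value falls inside the threshold interval,
    grow (zero-pad) the kernel and re-extract; otherwise pass the features to
    the fully connected stage (here a plain sum, as a stand-in)."""
    feat = extract_features(data, kernel)
    for _ in range(max_rounds):
        diff = feat[0] - feat[1]                  # differential operation value
        if low <= diff <= high and len(kernel) < len(data) - 1:
            kernel = kernel + [0.0]               # assumed "kernel filling": zero-pad
            feat = extract_features(data, kernel)
            continue
        break
    return sum(feat)                              # stand-in fully connected layer

score = identify([1.0, 2.0, 3.0, 4.0], [1.0, 1.0])
# features [3.0, 5.0, 7.0]; diff = -2.0 lies outside [-0.5, 0.5], so no padding → 15.0
```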
11. An apparatus for generating a virtual user, comprising:
a first input module, configured to acquire user data and input the user data into a trained convolutional neural network model for recognition to obtain an initial sample set corresponding to each real user;
a second input module, configured to input the initial sample set into a trained recurrent neural network model to extract dependencies among samples at different moments and obtain a time-series sample set; and
a generation module, configured to generate a virtual user according to the time-series sample set.
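The three-module apparatus of claim 11 can be sketched as a thin pipeline. The class and method names are hypothetical, and the two lambdas stand in for the trained convolutional and recurrent models, which the patent does not detail:

```python
class VirtualUserGenerator:
    """Claim 11 apparatus: first input module (CNN recognition), second input
    module (RNN temporal-dependency extraction), and a generation module."""

    def __init__(self, cnn, rnn):
        self.cnn = cnn    # trained convolutional model (stand-in callable)
        self.rnn = rnn    # trained recurrent model (stand-in callable)

    def first_input(self, user_data):
        # Recognize an initial sample set for each real user.
        return {user: self.cnn(data) for user, data in user_data.items()}

    def second_input(self, initial_sets):
        # Extract dependencies between samples at different moments.
        return {user: self.rnn(samples) for user, samples in initial_sets.items()}

    def generate(self, time_series_sets):
        # Produce one virtual user per time-series sample set.
        return [f"virtual_{user}" for user in sorted(time_series_sets)]

gen = VirtualUserGenerator(cnn=lambda d: sorted(d), rnn=lambda s: list(s))
users = gen.generate(gen.second_input(gen.first_input({"alice": [2, 1]})))
# → ["virtual_alice"]
```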
12. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program in the memory to perform the virtual user generation method according to any one of claims 1 to 10.
13. A storage medium storing a computer program adapted to be loaded by a processor to perform the virtual user generation method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111045204.3A CN113836005A (en) | 2021-09-07 | 2021-09-07 | Virtual user generation method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113836005A true CN113836005A (en) | 2021-12-24 |
Family
ID=78958584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111045204.3A Pending CN113836005A (en) | 2021-09-07 | 2021-09-07 | Virtual user generation method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113836005A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304324A (en) * | 2018-01-22 | 2018-07-20 | 百度在线网络技术(北京)有限公司 | Method for generating test case, device, equipment and storage medium |
KR20180084576A (en) * | 2017-01-17 | 2018-07-25 | 경북대학교 산학협력단 | Artificial agents and method for human intention understanding based on perception-action connected learning, recording medium for performing the method |
CN109995601A (en) * | 2017-12-29 | 2019-07-09 | 中国移动通信集团上海有限公司 | A kind of network flow identification method and device |
CN110162700A (en) * | 2019-04-23 | 2019-08-23 | 腾讯科技(深圳)有限公司 | The training method of information recommendation and model, device, equipment and storage medium |
CN110210883A (en) * | 2018-05-09 | 2019-09-06 | 腾讯科技(深圳)有限公司 | The recognition methods of team control account, device, server and storage medium |
US20190324781A1 (en) * | 2018-04-24 | 2019-10-24 | Epiance Software Pvt. Ltd. | Robotic script generation based on process variation detection |
CN111538668A (en) * | 2020-04-28 | 2020-08-14 | 济南浪潮高新科技投资发展有限公司 | Mobile terminal application testing method, device, equipment and medium based on reinforcement learning |
US20210081302A1 (en) * | 2019-09-17 | 2021-03-18 | International Business Machines Corporation | Automated software testing using simulated user personas |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116702834A (en) * | 2023-08-04 | 2023-09-05 | 深圳市智慧城市科技发展集团有限公司 | Data generation method, data generation device, and computer-readable storage medium |
CN116702834B (en) * | 2023-08-04 | 2023-11-03 | 深圳市智慧城市科技发展集团有限公司 | Data generation method, data generation device, and computer-readable storage medium |
Similar Documents
Publication | Title |
---|---|
CN111667056B | Method and apparatus for searching model structures |
CN111581092B | Simulation test data generation method, computer equipment and storage medium |
CN112632226B | Semantic search method and device based on legal knowledge graph and electronic equipment |
CN111125658B | Method, apparatus, server and storage medium for identifying fraudulent user |
CN110728313B | Classification model training method and device for intention classification recognition |
CN111382260A | Method, device and storage medium for correcting retrieved text |
CN108960574A | Quality determination method, device, server and the storage medium of question and answer |
CN111694937A | Interviewing method and device based on artificial intelligence, computer equipment and storage medium |
CN111444956A | Low-load information prediction method and device, computer system and readable storage medium |
US20230004979A1 | Abnormal behavior detection method and apparatus, electronic device, and computer-readable storage medium |
CN109800147A | A kind of test cases generation method and terminal device |
CN112214595A | Category determination method, device, equipment and medium |
CN115358397A | Parallel graph rule mining method and device based on data sampling |
CN114662676A | Model optimization method and device, electronic equipment and computer-readable storage medium |
CN113836005A | Virtual user generation method and device, electronic equipment and storage medium |
CN113413607A | Information recommendation method and device, computer equipment and storage medium |
CN113392220A | Knowledge graph generation method and device, computer equipment and storage medium |
CN116186219A | Man-machine dialogue interaction method, system and storage medium |
CN112463964B | Text classification and model training method, device, equipment and storage medium |
CN116415624A | Model training method and device, and content recommendation method and device |
KR102324196B1 | System and method for consolidating knowledge base |
CN113704519A | Data set determination method and device, computer equipment and storage medium |
CN112132367A | Modeling method and device for enterprise operation management risk identification |
CN114065640B | Data processing method, device, equipment and storage medium of federal tree model |
CN117194275B | Automatic software automatic test plan generation method and system based on intelligent algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||