CN116049733A - Neural network-based performance evaluation method, system, equipment and storage medium - Google Patents

Neural network-based performance evaluation method, system, equipment and storage medium

Info

Publication number
CN116049733A
Authority
CN
China
Prior art keywords
neural network
parameters
preset
initial training
performance evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310048829.8A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN202310048829.8A
Publication of CN116049733A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis


Abstract

The invention provides a performance evaluation method, system, equipment and storage medium based on a neural network. The neural network-based performance evaluation method comprises the following steps: acquiring a preset neural network model; acquiring preset parameters, and configuring the parameters of each neuron in the preset neural network model according to the preset parameters; acquiring a training sample, and modifying the parameters in the preset neural network model according to the training sample to reduce the loss function value and obtain a performance evaluation model; and obtaining detection parameters, and inputting the detection parameters into the performance evaluation model to obtain a performance evaluation result. The method and device help to improve the accuracy of performance evaluation.

Description

Neural network-based performance evaluation method, system, equipment and storage medium
Technical Field
The present invention relates to the field of performance evaluation, and in particular, to a method, a system, an apparatus, and a storage medium for performance evaluation based on a neural network.
Background
Performance evaluation is involved in various stages of product development. Generally, performance refers to the probability that a system or device will meet operational requirements under specified operating conditions and within a specified period of time.
Performance evaluation methods are wide-ranging, including analytical, exponential, statistical, computer-simulation and survey methods. Performance evaluation involves not only the theory and methods of analytical evaluation, but also requirements analysis and design descriptions, as well as issues of system architecture and model architecture frameworks. It can be seen that performance evaluation is a very broad and very difficult research task.
Disclosure of Invention
The invention provides a neural network-based performance evaluation method, a system, equipment and a storage medium, which are beneficial to obtaining accurate performance evaluation results.
In a first aspect, a method for evaluating performance based on a neural network is provided, including:
acquiring a preset neural network model;
acquiring preset parameters, and configuring parameters of each neuron in a preset neural network model according to the preset parameters;
acquiring a training sample, and modifying parameters in the preset neural network model according to the training sample to reduce a loss function value and obtain a performance evaluation model;
and obtaining detection parameters, and inputting the detection parameters into a performance evaluation model to obtain a performance evaluation result.
In one embodiment, the neural network-based performance evaluation method, wherein obtaining the preset parameters includes:
determining parameters corresponding to neurons in a preset neural network model to obtain parameter information;
generating a plurality of sets of parameter values according to the parameter information and a preset parameter generation rule, and respectively arranging each set of parameter values according to a preset sequence to obtain a plurality of solution vectors;
establishing an objective function, wherein the larger the objective function value of the solution vector is, the closer the solution vector is to an optimal solution vector;
iteratively evolving the plurality of solution vectors for a plurality of times according to the objective function to obtain a plurality of iteratively evolved solution vectors;
selecting a solution vector with the maximum objective function value from the iteratively evolved solution vectors to obtain an optimized solution vector;
determining the preset parameters according to the parameter values in the optimized solution vector;
the iterative evolution includes:
obtaining a plurality of solution vectors of the current cycle, determining a selection probability according to the objective function value of each solution vector, and selecting a preset number of solution vectors according to the selection probability to obtain a plurality of objective solution vectors, wherein the selection probability is positively correlated with the objective function value;
exchanging, in the plurality of target solution vectors, corresponding parameter values between two selected target solution vectors according to the crossover probability to obtain a plurality of exchanged target solution vectors, wherein the crossover probability is inversely related to the objective function value;
and replacing the parameter values in the target solution vectors with new parameter values according to the replacement probability in the plurality of exchanged target solution vectors to obtain a plurality of solution vectors of the next cycle.
In one embodiment, the neural network-based performance evaluation method, wherein acquiring the training sample includes:
acquiring an initial training sample, analyzing the initial training sample, and determining the characteristics of the initial training sample;
generating a new sample according to the initial training sample characteristics;
and obtaining the training sample based on the initial training sample and the new sample.
In one embodiment, the neural network-based performance evaluation method, wherein acquiring an initial training sample, analyzing the initial training sample, and determining initial training sample characteristics includes:
clustering is carried out on the initial training samples to obtain a plurality of categories of initial training samples;
and determining initial training sample characteristics according to the categories of the initial training samples.
In one embodiment, the neural network-based performance evaluation method, wherein generating a new sample according to the initial training sample feature includes:
acquiring the number of initial training samples in each category, and respectively calculating the ratio of the number of initial training samples in each category to the total number of initial training samples to obtain the proportion of each category;
and randomly copying initial training samples from the initial training samples of each category according to the proportion of that category to obtain new samples.
In one embodiment, the neural network-based performance evaluation method, wherein generating a new sample according to the initial training sample feature includes:
copying initial training samples of each category respectively to obtain intermediate samples;
respectively calculating the variances corresponding to the intermediate samples of each category to obtain category variances;
obtaining a random number for each intermediate sample, wherein the average value of the absolute values of the random numbers is smaller than or equal to the class variance corresponding to the intermediate sample;
and adding the intermediate sample with the corresponding random number to obtain a new sample.
In one embodiment, the neural network-based performance evaluation method, wherein modifying the parameters in the preset neural network model according to the training sample to reduce the loss function value and obtain a performance evaluation model includes:
dividing the training samples into a first type of samples and a second type of samples;
inputting a first type sample into the preset neural network model to obtain a plurality of first output values;
determining a plurality of first loss function values according to the plurality of first output values and corresponding real values respectively, and adjusting parameters in the preset neural network model according to the direction in which the first loss function values are reduced;
inputting a second type sample into the preset neural network model to obtain a plurality of second output values;
and determining a second loss function value according to the average value of the second output values and the average value of the corresponding real values, and adjusting parameters in the preset neural network model according to the direction in which the second loss function value is reduced.
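The two loss computations above, a per-sample loss for first-type samples and a single mean-based loss for second-type samples, can be sketched as follows. Squared error is an assumption here; the patent does not fix the form of the loss function.

```python
def first_type_losses(outputs, truths):
    """One first-loss value per sample (squared error assumed)."""
    return [(o - t) ** 2 for o, t in zip(outputs, truths)]

def second_type_loss(outputs, truths):
    """A single second-loss value computed from the mean of the
    output values and the mean of the corresponding true values."""
    mean_out = sum(outputs) / len(outputs)
    mean_true = sum(truths) / len(truths)
    return (mean_out - mean_true) ** 2

print(first_type_losses([1.0, 2.0], [1.5, 2.0]))  # [0.25, 0.0]
print(second_type_loss([1.0, 3.0], [2.0, 2.0]))   # 0.0
```

Note that the second-type loss can be zero even when individual predictions are wrong, as the example shows; only the averages are compared.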
In a second aspect, a neural network-based performance evaluation system is provided, comprising:
the acquisition module is used for acquiring a preset neural network model, acquiring preset parameters, acquiring training samples and acquiring detection parameters;
the configuration module is used for configuring parameters of each neuron in the preset neural network model according to the preset parameters;
the training module is used for modifying parameters in the preset neural network model according to the training sample so as to reduce the loss function value and obtain a performance evaluation model;
the evaluation module is used for inputting the detection parameters into the performance evaluation model to obtain a performance evaluation result.
In a third aspect, an electronic device is provided, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the steps of the neural network based performance evaluation method as described above are implemented when the processor executes the program.
In a fourth aspect, a storage medium is provided, on which a computer program is stored, wherein the computer program, when being executed by a processor, implements the steps of a neural network based performance evaluation method as described above.
According to the invention, the performance evaluation model is obtained by training the preset neural network model, and the performance evaluation is then completed through the performance evaluation model. By adopting the performance evaluation model for evaluation, errors caused by subjective evaluation indexes are reduced, and the accuracy of the performance evaluation is improved.
Drawings
Various other advantages and benefits of the present invention will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. It is evident that the figures described below are only some embodiments of the invention, from which other figures can be obtained without inventive effort for a person skilled in the art.
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a neural network based performance evaluation method according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network based performance evaluation system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Description of the embodiments
It should be noted that, without conflict, the embodiments and features of the embodiments in the present application may be combined with each other, and the present invention will be further described in detail with reference to the drawings and the specific embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
Examples
Fig. 1 is a flow chart of a neural network-based performance evaluation method in the present embodiment. Referring to fig. 1, the neural network-based performance evaluation method includes: step 10, step 20, step 30 and step 40.
And step 10, acquiring a preset neural network model.
The preset neural network model is a neural network based on deep learning. Optionally, the preset neural network model is a BP (back-propagation) neural network. A BP neural network consists of an input layer, an output layer and one or more hidden layers. In a three-layer BP network, the input layer contains x neurons, the hidden layer contains y neurons and the output layer contains z neurons, where x, y and z are positive integers. It has been demonstrated that a three-layer BP neural network can approximate any complex continuous function by adjusting the number of neurons in the hidden layer.
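A minimal sketch of such a three-layer BP structure in plain NumPy (the layer sizes, sigmoid activation and random initialization here are hypothetical choices for illustration, not taken from the patent):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bp_forward(x, W1, b1, W2, b2):
    """Forward pass of a three-layer BP network:
    input layer -> one hidden layer -> output layer."""
    hidden = sigmoid(W1 @ x + b1)  # hidden-layer activations
    return W2 @ hidden + b2        # output-layer values

# Hypothetical sizes: x = 4 inputs, y = 8 hidden neurons, z = 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
out = bp_forward(np.ones(4), W1, b1, W2, b2)
print(out.shape)  # (1,)
```

The hidden-layer width (8 here) is the knob the text refers to when it says the fitting capacity is adjusted via the number of hidden neurons.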
And step 20, acquiring preset parameters, and configuring parameters of each neuron in the preset neural network model according to the preset parameters.
In this embodiment, the initial parameters in the preset neural network model are not random but preset. Specifically, the parameters of each neuron in the preset neural network model are set to the corresponding preset parameters, yielding a preset neural network model with optimized initial parameters; training then proceeds on the basis of this model.
Alternatively, corresponding parameter acquisition rules may be determined according to different purposes, and the preset parameters obtained according to those rules. For example, for the purpose of searching for globally optimal parameters, preset parameters are generated through a particular parameter generation rule and used to configure the preset neural network model, reducing the risk of falling into local minima. For another example, for the purpose of improving the convergence rate, preset parameters are generated through a particular parameter generation rule and used to configure the preset neural network model, thereby accelerating training convergence.
For example, a simple neuron can be formulated as follows: y = wx + b, where y is the output value of the neuron, x is the input value, w is the weight, and b is the threshold (bias). The preset parameters here are the weight w and the threshold b; that is, the initial weight w and the initial threshold b in the preset neural network model are set to the preset parameters.
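The single-neuron formula can be illustrated directly (the parameter values below are illustrative only):

```python
def neuron(x, w, b):
    """y = w*x + b: weighted input plus threshold (bias)."""
    return w * x + b

# Hypothetical preset parameters: w = 0.5, b = 0.1.
y = neuron(2.0, 0.5, 0.1)
print(y)  # 1.1
```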
And step 30, acquiring a training sample, and modifying parameters in a preset neural network model according to the training sample to reduce the loss function value and obtain the efficiency evaluation model.
The loss function value is used to measure the degree of deviation between the true value and the predicted value. The larger the difference between the true value and the predicted value, the larger the loss function value; the smaller the difference, the smaller the loss function value.
Modifying the parameters in the preset neural network model according to the training sample means training the preset neural network model a plurality of times with the training sample, repeatedly updating the model and continuously optimizing its parameters, so that the difference between the real value and the predicted value keeps decreasing.
The preset neural network model is trained with the training samples, and its parameters are modified so that the predicted values approximate the true values more closely; the corresponding loss function value decreases continuously, finally yielding the trained neural network model, namely the performance evaluation model.
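A minimal sketch of this loss-reducing parameter update, using gradient descent on the single-neuron model y = wx + b (gradient descent and mean squared error are assumptions for illustration; the patent does not specify the optimizer or loss):

```python
# Fit y = w*x + b to samples whose true relation is y = 2x + 1.
samples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b, lr = 0.0, 0.0, 0.05

def loss(w, b):
    """Mean squared error between predictions and true values."""
    return sum((w * x + b - y) ** 2 for x, y in samples) / len(samples)

before = loss(w, b)
for _ in range(500):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in samples) / len(samples)
    gb = sum(2 * (w * x + b - y) for x, y in samples) / len(samples)
    w -= lr * gw  # step in the direction that reduces the loss
    b -= lr * gb
after = loss(w, b)
print(after < before)  # True: the loss function value decreased
```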
And step 40, acquiring detection parameters, and inputting the detection parameters into a performance evaluation model to obtain a performance evaluation result.
The detection parameters are usually test data or simulation data of the evaluation object, and are used to evaluate the performance of that object. Alternatively, a detection parameter may be an estimated performance parameter; for example, in a vehicle transportation effectiveness evaluation, the detection parameter may be a maneuvering performance parameter of the vehicle. The detection parameters are input into the performance evaluation model, which outputs the performance evaluation result.
Furthermore, in some scenarios, some performance evaluation models may also require index parameters as input. The performance evaluation model can then analyze the performance of the evaluation object according to both the index parameters and the detection parameters, and output the performance evaluation result.
According to the invention, the performance evaluation model is obtained by training the preset neural network model, and the performance evaluation is then completed through the performance evaluation model. By adopting the performance evaluation model for evaluation, errors caused by subjective evaluation indexes are reduced, and the accuracy of the performance evaluation is improved.
In one embodiment, obtaining the preset parameter includes: step 210, step 220, step 230, step 240, step 250, step 260.
Step 210, determining parameters corresponding to neurons in a preset neural network model to obtain parameter information.
The parameters to be optimized in the preset neural network model are determined and organized to obtain the parameter information. The parameter information may take the form of a set in which the parameters are arranged in a preset order.
And 220, generating a plurality of sets of parameter values according to the parameter information and a preset parameter generation rule, and respectively arranging each set of parameter values according to a preset sequence to obtain a plurality of solution vectors.
Each set of parameter values corresponds to all parameter values of the preset neural network model. The sets of parameter values may be from 5 to 1000 sets, such as 30 sets.
The preset parameter generation rule may be to randomly generate multiple sets of parameter values in the parameter value range, or generate multiple sets of parameter values in the parameter value range according to an arithmetic rule.
Optionally, when arranging the parameter values of each set in a preset sequence, the values are ordered according to the positions of the corresponding neurons in the preset neural network model. For example, parameters of the input layer are ranked first, parameters of the hidden layer in the middle, and parameters of the output layer last.
The solution vector may take the form [x1, x2, x3, y1, y2, y3, z1, z2, z3], where x1, x2, x3 represent three parameters in the input layer, y1, y2, y3 three parameters in the hidden layer, and z1, z2, z3 three parameters in the output layer, each randomly generated within its respective value range. It will be appreciated that one solution vector corresponds to one set of parameter values.
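The solution-vector encoding can be sketched as follows (the value range [-1, 1] and the population size of 30 sets are illustrative; the text only requires 5 to 1000 sets):

```python
import random

def make_solution_vector(n_params, lo=-1.0, hi=1.0):
    """A solution vector: all network parameters flattened in a
    preset order (input layer first, then hidden, then output)."""
    return [random.uniform(lo, hi) for _ in range(n_params)]

random.seed(0)
# 30 sets of parameter values, 9 parameters each (3 per layer).
population = [make_solution_vector(9) for _ in range(30)]
print(len(population), len(population[0]))  # 30 9
```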
And 230, establishing an objective function, wherein the larger the objective function value of the solution vector is, the closer the solution vector is to the optimal solution vector.
The objective function is used to evaluate the solution vector, typically non-negative. For example, the higher the score for a solution vector, the closer the solution vector is to the optimal solution vector.
Alternatively, when the optimal solution vector is the solution vector with the largest respective parameter value, the objective function may be the sum of squares of the respective parameter values.
Of course, in some other embodiments, an objective function is established, wherein the smaller the objective function value of the solution vector, the closer the solution vector is to the optimal solution vector.
Step 240, performing iterative evolution on the plurality of solution vectors for a plurality of times according to the objective function, so as to obtain a plurality of iteratively evolved solution vectors.
In the iterative evolution process, operations such as crossover and mutation are performed on the solution vectors to continuously generate new solution vectors, which are then preferentially selected according to the objective function to obtain a plurality of iteratively evolved solution vectors.
And 250, selecting a solution vector with the largest objective function value from the multiple solution vectors after iterative evolution to obtain an optimized solution vector.
And calculating the objective function value of each solution vector, and selecting the maximum solution vector as an optimized solution vector.
Step 260, determining a preset parameter according to the parameter value in the optimized solution vector.
It can be understood that the parameter value in the optimized solution vector is the preset parameter.
Optionally, the iterative evolution comprises: step 241, step 242 and step 243.
Step 241, obtaining a plurality of solution vectors of the current cycle, determining a selection probability according to the objective function value of each solution vector, and selecting a preset number of solution vectors according to the selection probability to obtain a plurality of objective solution vectors, wherein the selection probability is positively correlated with the objective function value.
The plurality of solution vectors of the current loop may be the solution vector of the previous loop output or the initial solution vector.
And selecting partial solution vectors from the multiple solution vectors of the current cycle according to the selection probability to obtain multiple target solution vectors. That is, the number of target solution vectors is less than the number of solution vectors of the current cycle.
The larger the objective function value, the larger the selection probability. Optionally, the selection probability is proportional to the objective function value.
For example, the selection probability is as follows:
P_i = T_i / (T_1 + T_2 + … + T_n)
where P_i is the selection probability of the i-th solution vector, T_i is the objective function value of the i-th solution vector, and n is the total number of solution vectors.
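The selection rule described above, where the probability of picking a solution vector is proportional to its objective function value, is a standard roulette-wheel selection; a minimal sketch (assuming non-negative objective values, consistent with the text):

```python
import random

def select(population, objective, k):
    """Pick k solution vectors with probability proportional to
    their objective function values (roulette-wheel selection)."""
    values = [objective(v) for v in population]
    total = sum(values)
    probs = [t / total for t in values]  # P_i = T_i / sum of T_j
    return random.choices(population, weights=probs, k=k)

random.seed(1)
pop = [[0.1], [0.5], [0.9]]
# Hypothetical objective: sum of squared parameter values.
chosen = select(pop, lambda v: sum(x * x for x in v), k=2)
print(len(chosen))  # 2
```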
And step 242, in the plurality of target solution vectors, exchanging corresponding parameter values between two selected target solution vectors according to the crossover probability to obtain a plurality of exchanged target solution vectors, wherein the crossover probability is inversely related to the objective function value.
And pairing the target solution vectors in pairs, and exchanging parameter values of the paired target solution vectors.
Optionally, in the pairing process of the target solution vectors, the probability of successful pairing of the target solution vectors is determined according to the objective function value. The closer the objective function values of the two objective solution vectors are, the greater the pairing success probability is. Wherein the success probability is less than 1.
Alternatively, the crossover probability is set as desired. When exchanging the corresponding parameter values of the two selected target solution vectors, it is first determined, according to the crossover probability, whether each parameter of the target solution vectors is to be exchanged, and the parameters selected for exchange are then swapped between the two vectors.
Optionally, the crossover probability is inversely related to the objective function value, e.g. inversely proportional to it; for solution vectors with larger objective function values, parameter exchange is less likely to occur.
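The per-parameter exchange step can be sketched as follows (a fixed crossover probability is assumed here for simplicity; the text makes it depend inversely on the objective value):

```python
import random

def crossover(vec_a, vec_b, p_cross):
    """Swap corresponding parameter values between two paired
    target solution vectors, deciding each position independently
    with probability p_cross."""
    a, b = vec_a[:], vec_b[:]  # copies; inputs are left intact
    for i in range(len(a)):
        if random.random() < p_cross:
            a[i], b[i] = b[i], a[i]
    return a, b

random.seed(2)
a, b = crossover([1, 2, 3], [4, 5, 6], p_cross=0.5)
# Whatever was swapped, each position still holds the same pair
# of values, so the position-wise sums are unchanged.
print(sorted(a[i] + b[i] for i in range(3)))  # [5, 7, 9]
```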
And 243, replacing the parameter values in the target solution vectors with new parameter values according to the replacement probability in the plurality of target solution vectors after the exchange, so as to obtain a plurality of solution vectors of the next cycle.
Alternatively, the new parameter values are randomly generated within the corresponding parameter ranges or calculated using the following formula.
A = [α × A_a × (1 − P) + β × A_b × P] ÷ 2
where A is the new parameter value, α is a first weight coefficient, β is a second weight coefficient, A_a is a parameter value randomly generated within the corresponding parameter range, P is the selection probability, and A_b is the original parameter value; α + β = 1, 0 ≤ α ≤ 1, 0 ≤ β ≤ 1 (e.g. 0.1 ≤ α ≤ 0.9), with α and β set as required or computed by a formula.
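The replacement-value computation above can be evaluated directly (the values of α and P below are illustrative; β is derived as 1 − α per the constraint in the text):

```python
def new_param(a_rand, a_orig, p, alpha=0.5):
    """A = [alpha*A_a*(1-P) + beta*A_b*P] / 2, with beta = 1 - alpha.
    a_rand is a value drawn from the parameter's range; a_orig is
    the original parameter value; p is the selection probability."""
    beta = 1.0 - alpha
    return (alpha * a_rand * (1.0 - p) + beta * a_orig * p) / 2.0

print(new_param(a_rand=0.8, a_orig=0.2, p=0.5))  # 0.125
```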
Alternatively, the replacement probability may be randomly generated, or inversely related to the objective function value, e.g. inversely proportional to it; for solution vectors with larger objective function values, parameter replacement is less likely to occur.
Alternatively, the replacement probability is set as needed. When replacing parameter values in the exchanged target solution vectors, it is first determined, according to the replacement probability, whether each parameter of the target solution vector is to be replaced; the parameters selected for replacement are replaced with new parameter values, and the remaining parameters are kept unchanged.
In one embodiment, the neural network-based performance evaluation method, wherein obtaining the training sample includes: step 310, step 320, step 330.
Step 310, an initial training sample is obtained, and the initial training sample is analyzed to determine the characteristics of the initial training sample.
The initial training samples are raw data obtained when an evaluation object is tested or detected, such as vehicle maneuver performance parameters.
Optionally, parameters of the same type in the initial training sample are analyzed, and their distribution characteristics or other parameter characteristics are determined to obtain the characteristics of the initial training sample; for example, the parameters in the initial training sample may exhibit a normal distribution.
Step 320, generating a new sample according to the initial training sample characteristics.
A new sample is generated based on the initial training sample characteristics, such that the new sample has the same or similar characteristics as the initial training sample. For example, if the initial training samples exhibit a normal distribution, the new samples formed also exhibit a normal distribution.
Step 330, obtaining a training sample based on the initial training sample and the new sample.
Optionally, the initial training sample and the new sample are combined to form a training sample.
Optionally, the initial training samples are taken as one group of samples, and the new samples are divided into one or more groups, such as a second group of samples and a third group of samples. The initial training samples and the grouped new samples are then combined group by group to form grouped training samples.
In one embodiment, obtaining an initial training sample, analyzing the initial training sample, and determining characteristics of the initial training sample includes: step 311, step 312.
Step 311, clustering the initial training samples to obtain multiple classes of initial training samples.
Cluster analysis is performed on parameters of the same type in the initial training samples to classify the samples. For example, clustering is performed according to the similarity between initial training samples, and samples with high similarity are assigned to the same class.
Step 312, determining the characteristics of the initial training sample according to the category of the initial training sample.
The initial training sample features describe how the initial training samples are classified. For example, the feature may be that the samples fall into five classes, together with the distribution of sample counts over those five classes.
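The similarity-based clustering of step 311 might be sketched as a greedy grouping of scalar samples; the threshold rule, the running-mean centroid, and the helper name are assumptions, since the text does not fix a particular clustering algorithm.

```python
def cluster_by_similarity(samples, threshold):
    """Greedy clustering: assign each sample to the first cluster whose
    centroid is within `threshold`; otherwise start a new cluster."""
    clusters = []   # list of lists of samples
    centroids = []  # running mean of each cluster
    for x in samples:
        for i, c in enumerate(centroids):
            if abs(x - c) <= threshold:
                clusters[i].append(x)
                centroids[i] = sum(clusters[i]) / len(clusters[i])
                break
        else:
            # No existing cluster is similar enough: open a new one.
            clusters.append([x])
            centroids.append(x)
    return clusters
```

Samples with high similarity (small distance) end up in the same class, matching the description of step 311.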
In one embodiment, generating a new sample based on the initial training sample characteristics includes: step 321 and step 322.
Step 321, obtaining the number of initial training samples in each category, and respectively calculating the ratio of the number of initial training samples in each category to the total number of initial training samples to obtain the duty ratio of each category.
Alternatively, the duty ratio of each category is calculated as:
B_k = N_k / N
wherein B_k is the duty ratio of the kth category, N_k is the number of initial training samples of the kth category, and N is the total number of initial training samples.
And 322, copying the initial training samples randomly from the initial training samples of each category according to the duty ratio of each category to obtain new samples.
For example, the number of new samples for each category is calculated from that category's duty ratio, and new samples are then copied at random from the initial training samples of the corresponding category until that count is reached. The duty ratio of each category among the new samples is therefore the same as the duty ratio of each category among the initial training samples.
That is, the new sample has the same or similar class characteristics as the initial training sample.
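Steps 321-322 can be sketched as follows. The dictionary layout of the grouped samples, the rounding rule, and the function name are illustrative assumptions.

```python
import random

def copy_by_proportion(grouped_samples, n_new):
    """Randomly copy samples from each category so that the new samples
    keep the same per-category proportions (duty ratios) as the input.

    grouped_samples: dict mapping category -> list of samples.
    n_new: total number of new samples to generate.
    """
    total = sum(len(v) for v in grouped_samples.values())
    new_samples = {}
    for category, samples in grouped_samples.items():
        share = len(samples) / total            # B_k = N_k / N
        count = round(n_new * share)            # new samples for this class
        new_samples[category] = [random.choice(samples) for _ in range(count)]
    return new_samples
```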
In one embodiment, generating a new sample based on the initial training sample characteristics includes: step 323, step 324, step 325, step 326.
Step 323, copying the initial training samples of each category respectively to obtain intermediate samples.
Alternatively, the intermediate samples may be obtained in the manner of steps 321 and 322, yielding intermediate samples with the same characteristics as the initial training samples; or the initial training samples may be copied directly, yielding intermediate samples identical to the initial training samples.
Step 324, calculating the variances corresponding to the intermediate samples of each category respectively to obtain category variances.
The classification of the intermediate samples can be obtained from the copying process and the classification of the initial training samples. The variance of the intermediate samples in each category is then calculated to obtain the category variances.
Step 325, a random number is obtained for each intermediate sample, and the average value of the absolute values of the random numbers is smaller than or equal to the class variance corresponding to the intermediate sample.
The random numbers are generated subject to the following constraint:
|R_m| ≤ δ_n
wherein R_m is the random number of the mth intermediate sample, and δ_n is the category variance of the nth category.
Step 326, adding the intermediate sample and the corresponding random number to obtain a new sample.
Specifically, the mth intermediate sample is added to its corresponding random number R_m to obtain a new sample. The new samples may therefore differ from the initial training samples.
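Steps 323-326 (copy, compute per-class variance, add bounded random noise) might look like this for scalar samples. Treating each sample as a single number, drawing the noise with `random.uniform`, and the function name are simplifying assumptions.

```python
import random

def perturb_samples(samples_by_class):
    """Generate new samples by adding bounded random noise to copies of
    the original samples."""
    new_samples = {}
    for category, samples in samples_by_class.items():
        mean = sum(samples) / len(samples)
        variance = sum((x - mean) ** 2 for x in samples) / len(samples)
        # Each random number R_m satisfies |R_m| <= class variance.
        new_samples[category] = [
            x + random.uniform(-variance, variance) for x in samples
        ]
    return new_samples
```

The bounded noise keeps each new sample close to its source sample while still differing from the initial training samples.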
In one embodiment, modifying parameters in a preset neural network model according to a training sample to reduce a loss function value and obtain a performance evaluation model, including:
step 341, dividing the training samples into a first type of samples and a second type of samples.
Optionally, the first type of sample is 80% of the training samples and the second type of sample is 20% of the training samples.
Step 342, inputting the first type of samples into a preset neural network model to obtain a plurality of first output values.
The first type of samples can be divided into a plurality of batches, and the batches are input into a preset neural network model. Each first type sample correspondingly obtains a first output value.
And 343, determining a plurality of first loss function values according to the plurality of first output values and the corresponding real values, and adjusting parameters in the preset neural network model according to the direction in which the first loss function values decrease.
That is, the plurality of first output values and the corresponding real values are substituted into the loss function to calculate the plurality of first loss function values.
And 344, inputting the second type of samples into the neural network model to obtain a plurality of second output values.
The second type of sample can be divided into a plurality of batches, and the batches are input into a preset neural network model. Each second type sample correspondingly obtains a second output value.
And step 345, determining a second loss function value according to the average value of the second output values and the average value of the corresponding real values, and adjusting parameters in the preset neural network model according to the direction in which the second loss function value is reduced.
That is, the average value of the second output values and the average value of the corresponding real values are substituted into the loss function to calculate the second loss function value. Alternatively, an average output value may be computed per group, so that a plurality of second loss function values are determined.
And adjusting parameters in the preset neural network model to enable the loss function to be converged, so that training of the preset neural network model is completed, and a performance evaluation model is obtained.
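The two-stage adjustment of steps 341-345 can be sketched with a one-parameter linear model standing in for the preset neural network. The 80/20 split ratio matches the optional embodiment above; the squared-error loss, the learning rate, and the function name are illustrative assumptions, not the patented implementation.

```python
def train(samples, targets, lr=0.01, epochs=100):
    """Fit y = w * x by gradient descent, adjusting w in the direction
    that decreases the loss (per-sample loss on the first type of
    samples, loss on averages for the second type)."""
    split = int(0.8 * len(samples))
    x1, y1 = samples[:split], targets[:split]   # first type of samples
    x2, y2 = samples[split:], targets[split:]   # second type of samples
    w = 0.0
    for _ in range(epochs):
        # Step 343: per-sample squared-error loss on the first type.
        for x, y in zip(x1, y1):
            grad = 2 * (w * x - y) * x
            w -= lr * grad
        # Step 345: loss on the averages of the second type.
        mean_out = sum(w * x for x in x2) / len(x2)
        mean_true = sum(y2) / len(y2)
        mean_x = sum(x2) / len(x2)
        grad = 2 * (mean_out - mean_true) * mean_x
        w -= lr * grad
    return w
```

Both stages move the parameter in the direction that decreases the corresponding loss, so the loop converges once the loss function converges.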
Examples
Fig. 2 is a schematic structural diagram of a performance evaluation system based on a neural network according to the present embodiment, as shown in fig. 2, the performance evaluation system 50 based on a neural network includes: an acquisition module 501, a configuration module 502, a training module 503, and an evaluation module 504.
The acquiring module 501 is configured to acquire a preset neural network model, acquire preset parameters, acquire a training sample, and acquire detection parameters.
The configuration module 502 is configured to configure parameters of each neuron in the preset neural network model according to the preset parameters.
The training module 503 is configured to modify parameters in a preset neural network model according to the training sample, so that the loss function value is reduced, and a performance evaluation model is obtained.
The evaluation module 504 is configured to input the detection parameter into the performance evaluation model to obtain a performance evaluation result.
According to the invention, the performance evaluation model is obtained by training the preset neural network model, and the performance evaluation is then completed through the performance evaluation model. By adopting the performance evaluation model for evaluation, errors caused by subjective evaluation indexes are reduced, and the accuracy of the performance evaluation is improved.
In one embodiment, the obtaining module 501 is further configured to determine a parameter corresponding to a neuron in the preset neural network model, so as to obtain parameter information; generating a plurality of sets of parameter values according to the parameter information and a preset parameter generation rule, and respectively arranging each set of parameter values according to a preset sequence to obtain a plurality of solution vectors; establishing an objective function, wherein the larger the objective function value of the solution vector is, the closer the solution vector is to the optimal solution vector; iteratively evolving the plurality of solution vectors for a plurality of times according to the objective function to obtain a plurality of iteratively evolved solution vectors; selecting a solution vector with the maximum objective function value from the iteratively evolved solution vectors to obtain an optimized solution vector; and determining preset parameters according to the parameter values in the optimized solution vector.
Optionally, the obtaining module 501 is further configured to obtain a plurality of solution vectors of the current cycle, determine a selection probability according to an objective function value of each solution vector, and select a preset number of solution vectors according to the selection probability to obtain a plurality of objective solution vectors, where the selection probability is positively related to the objective function value; exchanging corresponding parameter values in the two selected target solution vectors according to the cross probability in the plurality of target solution vectors to obtain a plurality of exchanged target solution vectors, wherein the cross probability is inversely related to the target function value; and replacing the parameter values in the target solution vectors with new parameter values according to the replacement probability in the plurality of the target solution vectors after the exchange to obtain a plurality of solution vectors of the next cycle.
In one embodiment, the obtaining module 501 is further configured to obtain an initial training sample, analyze the initial training sample, and determine characteristics of the initial training sample; generating a new sample according to the initial training sample characteristics; based on the initial training sample and the new sample, a training sample is obtained.
In one embodiment, the obtaining module 501 is further configured to perform clustering processing on the initial training samples to obtain multiple types of initial training samples; and determining initial training sample characteristics according to the categories of the initial training samples.
In one embodiment, the obtaining module 501 is further configured to obtain the number of initial training samples in each category, and calculate a ratio of the number of initial training samples to the total number of initial training samples in each category, so as to obtain a duty ratio of each category; and copying the initial training samples randomly from the initial training samples of each category according to the duty ratio of each category to obtain new samples.
In one embodiment, the obtaining module 501 is further configured to copy the initial training samples of each class to obtain intermediate samples; respectively calculating the variances corresponding to the intermediate samples of each category to obtain category variances; obtaining a random number for each intermediate sample, wherein the average value of the absolute values of the random numbers is smaller than or equal to the class variance corresponding to the intermediate sample; and adding the intermediate sample with the corresponding random number to obtain a new sample.
In one embodiment, the training module 503 is further configured to divide the training samples into a first type of samples and a second type of samples; inputting the first type sample into a preset neural network model to obtain a plurality of first output values; determining a plurality of first loss function values according to the plurality of first output values and corresponding real values respectively, and adjusting parameters in a preset neural network model according to the direction in which the first loss function values are reduced; inputting the second type sample into the neural network model to obtain a plurality of second output values; and determining a second loss function value according to the average value of the second output values and the average value of the corresponding real values, and adjusting parameters in the preset neural network model according to the direction in which the second loss function value is reduced.
The neural network-based performance evaluation system 50 in this embodiment is the system corresponding to the neural network-based performance evaluation method. For the operating principle of the performance evaluation system 50, reference may be made to the method described above, which is not repeated here.
Examples
Fig. 3 is a schematic structural diagram of an electronic device according to the present invention. The electronic device comprises a memory 601 and a processor 602, the memory 601 storing a computer program executable on the processor 602, wherein the steps of the neural network based performance evaluation method as described above are implemented when the processor 602 executes the program.
The electronic device comprises a memory 601 and a processor 602, which are communicatively connected to each other via a system bus 603. It should be noted that only an electronic device having components 601-603 is shown in the figure, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. Those skilled in the art will appreciate that the electronic device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
The electronic device may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, and the like. The device can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or a voice control device.
The memory 601 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 601 may be an internal storage unit of the device, such as a hard disk or internal memory of the device. In other embodiments, the memory 601 may also be an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, or the like. Of course, the memory 601 may also include both the internal storage unit of the device and its external storage device. In this embodiment, the memory 601 is typically used to store the operating system and various types of application software installed on the device. In addition, the memory 601 may also be used to temporarily store various types of data that have been output or are to be output.
The processor may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor is typically used to control the overall operation of the device. In this embodiment, the processor is configured to execute computer readable instructions or process data stored in the memory.
Examples
The present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the neural network based performance evaluation method as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method of the embodiments of the present application.
It is apparent that the embodiments described above are only some, not all, of the embodiments of the present application. The preferred embodiments of the present application are given in the drawings but do not limit the patent scope of the application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the specification and the drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the protection scope of the application.

Claims (10)

1. A neural network-based performance evaluation method, comprising:
acquiring a preset neural network model;
acquiring preset parameters, and configuring parameters of each neuron in a preset neural network model according to the preset parameters;
acquiring a training sample, and modifying parameters in the preset neural network model according to the training sample to reduce a loss function value and obtain a performance evaluation model;
and obtaining detection parameters, and inputting the detection parameters into a performance evaluation model to obtain a performance evaluation result.
2. The neural network-based performance evaluation method of claim 1, wherein the obtaining the preset parameters comprises:
determining parameters corresponding to neurons in a preset neural network model to obtain parameter information;
generating a plurality of sets of parameter values according to the parameter information and a preset parameter generation rule, and respectively arranging each set of parameter values according to a preset sequence to obtain a plurality of solution vectors;
establishing an objective function, wherein the larger the objective function value of the solution vector is, the closer the solution vector is to an optimal solution vector;
iteratively evolving the plurality of solution vectors for a plurality of times according to the objective function to obtain a plurality of iteratively evolved solution vectors;
selecting a solution vector with the maximum objective function value from the iteratively evolved solution vectors to obtain an optimized solution vector;
determining the preset parameters according to the parameter values in the optimized solution vector;
the iterative evolution includes:
obtaining a plurality of solution vectors of the current cycle, determining a selection probability according to the objective function value of each solution vector, and selecting a preset number of solution vectors according to the selection probability to obtain a plurality of objective solution vectors, wherein the selection probability is positively correlated with the objective function value;
exchanging corresponding parameter values in the two selected target solution vectors according to the cross probability in the plurality of target solution vectors to obtain a plurality of exchanged target solution vectors, wherein the cross probability is inversely related to the target function value;
and replacing the parameter values in the target solution vectors with new parameter values according to the replacement probability in the plurality of exchanged target solution vectors to obtain a plurality of solution vectors of the next cycle.
3. The neural network-based performance evaluation method of claim 1, wherein the acquiring training samples comprises:
acquiring an initial training sample, analyzing the initial training sample, and determining the characteristics of the initial training sample;
generating a new sample according to the initial training sample characteristics;
and obtaining the training sample based on the initial training sample and the new sample.
4. The neural network-based performance evaluation method of claim 1, wherein obtaining an initial training sample and analyzing the initial training sample to determine initial training sample characteristics comprises:
clustering is carried out on the initial training samples to obtain a plurality of categories of initial training samples;
and determining initial training sample characteristics according to the categories of the initial training samples.
5. The neural network-based performance evaluation method of claim 4, wherein generating new samples based on the initial training sample characteristics comprises:
acquiring the number of initial training samples in each category, and respectively calculating the ratio of the number of the initial training samples to the total number of the initial training samples in each category to obtain the duty ratio of each category;
and copying the initial training samples randomly from the initial training samples of each category according to the duty ratio of each category to obtain new samples.
6. The neural network-based performance evaluation method of claim 4, wherein generating new samples based on the initial training sample characteristics comprises:
copying initial training samples of each category respectively to obtain intermediate samples;
respectively calculating the variances corresponding to the intermediate samples of each category to obtain category variances;
obtaining a random number for each intermediate sample, wherein the average value of the absolute values of the random numbers is smaller than or equal to the class variance corresponding to the intermediate sample;
and adding the intermediate sample with the corresponding random number to obtain a new sample.
7. The neural network-based performance evaluation method according to claim 1, wherein modifying the parameters in the preset neural network model according to the training samples so that the loss function value decreases, to obtain the performance evaluation model, includes:
dividing the training samples into a first type of samples and a second type of samples;
inputting a first type sample into the preset neural network model to obtain a plurality of first output values;
determining a plurality of first loss function values according to the plurality of first output values and corresponding real values respectively, and adjusting parameters in the preset neural network model according to the direction in which the first loss function values are reduced;
inputting a second type sample into the neural network model to obtain a plurality of second output values;
and determining a second loss function value according to the average value of the second output values and the average value of the corresponding real values, and adjusting parameters in the preset neural network model according to the direction in which the second loss function value is reduced.
8. A neural network-based performance evaluation system, comprising:
the acquisition module is used for acquiring a preset neural network model, acquiring preset parameters, acquiring training samples and acquiring detection parameters;
the configuration module is used for configuring parameters of each neuron in the preset neural network model according to the preset parameters;
the training module is used for modifying parameters in the preset neural network model according to the training sample so as to reduce the loss function value and obtain a performance evaluation model;
the evaluation module is used for inputting the detection parameters into the performance evaluation model to obtain a performance evaluation result.
9. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, performs the steps of the neural network-based performance evaluation method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the neural network based performance evaluation method of any one of claims 1 to 7.
CN202310048829.8A 2023-02-01 2023-02-01 Neural network-based performance evaluation method, system, equipment and storage medium Pending CN116049733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310048829.8A CN116049733A (en) 2023-02-01 2023-02-01 Neural network-based performance evaluation method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310048829.8A CN116049733A (en) 2023-02-01 2023-02-01 Neural network-based performance evaluation method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116049733A true CN116049733A (en) 2023-05-02

Family

ID=86125228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310048829.8A Pending CN116049733A (en) 2023-02-01 2023-02-01 Neural network-based performance evaluation method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116049733A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination