CN111461328B - Training method of neural network - Google Patents

Training method of neural network

Info

Publication number: CN111461328B
Application number: CN202010259862.1A
Authority: CN (China)
Other versions: CN111461328A (Chinese)
Inventor: 陈志熙 (Chen Zhixi)
Assignee (current and original): Nanjing Starfire Technology Co ltd
Legal status: Active (application granted)
Prior art keywords: sample data, antenna performance, neural network, preset, training
Events: application filed by Nanjing Starfire Technology Co ltd; publication of CN111461328A; application granted; publication of CN111461328B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The application provides a training method for a neural network, comprising the following steps: inputting task information into a preset learning model to obtain sample data, wherein the task information indicates the characteristics of the samples to be acquired; processing the sample data with a prediction model to obtain a prediction result; and, when the prediction result meets a predetermined condition, determining that the preset learning model has completed training. The method effectively guides the neural network to perform autonomous learning, with high efficiency and good training accuracy.

Description

Training method of neural network
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a training method of a neural network.
Background
In the field of artificial intelligence, neural networks are used to realize a variety of functions, such as signal processing or image recognition. Before a neural network can process a user's target task, it generally must be trained until its processing quality meets the user's requirements. The training process often requires a large amount of manually screened sample data, which makes it inefficient and ineffective and harms the accuracy of the resulting neural network.
Disclosure of Invention
In view of the above, one of the technical problems to be solved by the present application is to provide a training method for a neural network, which can guide the neural network to perform autonomous learning.
An embodiment of the application provides a training method for a neural network, comprising the following steps: inputting task information into a preset learning model to obtain sample data, wherein the task information indicates the characteristics of the samples to be acquired;
processing the sample data with a prediction model to obtain a prediction result;
and, when the prediction result meets a predetermined condition, determining that the preset learning model has completed training.
Optionally, in an embodiment of the present application, determining that the preset learning model has completed training when the prediction result meets the predetermined condition includes:
determining that the preset learning model has completed training when the difference between the current prediction result and the previous prediction result is less than or equal to a preset difference value.
Optionally, in an embodiment of the present application, the training method of the neural network further includes:
when the prediction result does not meet the predetermined condition, inputting the difference between the current and previous prediction results, together with the task information, into the preset learning model to obtain new sample data;
and inputting the new sample data into the prediction model to obtain a new prediction result.
Optionally, in an embodiment of the present application, the steps of obtaining new sample data and a new prediction result when the prediction result does not meet the predetermined condition further include:
determining that the prediction result does not meet the predetermined condition when the difference between the current prediction result and the previous prediction result is greater than the preset difference value.
Optionally, in an embodiment of the present application, processing the sample data with the prediction model to obtain a prediction result includes:
constructing a knowledge graph of the sample data;
and inputting the knowledge graph into the prediction model to obtain the prediction result.
An embodiment of the application also provides a training system for a neural network, comprising a learning module and a prediction module, wherein:
The learning module is used for inputting task information into a preset learning model to obtain sample data;
the prediction module is used for obtaining a prediction result of the sample data by using a prediction model, and determining that the preset learning model is trained when the prediction result meets a preset condition;
Optionally, in an embodiment of the present application, the prediction module is further configured to determine that the preset learning model completes training when a difference between the predicted result and a last predicted result is less than or equal to a preset difference value.
Optionally, in one embodiment of the present application, when the predicted result does not meet the predetermined condition, the prediction module inputs a difference between the predicted result and a last predicted result and the task information into the preset learning model to obtain new sample data;
And inputting the new sample data into the prediction model to obtain a new prediction result.
Optionally, in an embodiment of the present application, the prediction module is further configured to determine that the predicted result does not meet the predetermined condition when a difference between the predicted result and a last predicted result is greater than a preset difference.
Optionally, in an embodiment of the present application, the training system of the neural network further includes a knowledge-graph construction module, where the knowledge-graph construction module is configured to construct a knowledge graph of the sample data;
The prediction module is further configured to input the knowledge graph into the prediction model to obtain the prediction result.
An embodiment of the application provides a training method and a training system for a neural network. The training method provided by the application can effectively guide the neural network to perform autonomous learning, with high efficiency and good accuracy.
Drawings
FIG. 1 is a flowchart of a training method of a neural network according to an embodiment of the present application;
FIG. 2 is a flowchart of a training method of a neural network according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the embodiments of the present application, these solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by a person skilled in the art from the embodiments of the present application shall fall within the scope of protection of the embodiments of the present application.
Embodiments of the present application will be further described with reference to the accompanying drawings.
Embodiment 1
As shown in fig. 1, fig. 1 is a flowchart of a neural network training method according to an embodiment of the present application, including:
S101, inputting task information into a preset learning model to obtain sample data, wherein the task information is used for indicating the characteristics of a sample to be acquired;
In the embodiment of the application, inputting task information into a preset learning model to obtain sample data includes: determining the data to be learned according to the description of the training target task using the preset learning model. For example, from the header data of the data related to the training target task contained in the task information, the preset learning model can determine the detailed data of the target task through tools such as the internet, sensors, or simulation software, and use that detailed data as the sample data for training the target task. Of course, the sample data may also be determined by manual screening; this embodiment merely illustrates one way of obtaining sample data with a preset learning model and does not limit the application.
In this embodiment, the preset learning model is a deep learning model: a neural network that can learn the mapping relations between the various data related to the target task information and the content of the target task. For example, given the header data of a target task as input, the preset learning model can output a batch of sample data, together with the detailed parameters of that sample data, related to the target task. This effectively improves training speed and reduces the amount of data to be learned.
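As an illustration of the mapping just described, the following Python sketch stands in for the preset learning model. The function `propose_samples`, the task-header format, and the uniform random sampling are all hypothetical assumptions for illustration; the patent's actual model is a trained deep network, not a random sampler.

```python
import random

def propose_samples(task_header, n_points, seed=0):
    """Hypothetical stand-in for the preset learning model.  Given header
    data that (in this sketch) just names parameter ranges for the target
    task, it proposes n_points sample-point specifications to acquire.
    A real implementation would be the trained deep learning model."""
    rng = random.Random(seed)
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in task_header["ranges"].items()}
        for _ in range(n_points)
    ]

# Example: a task header naming a frequency range and one position axis.
task = {"ranges": {"f": (1.0e9, 2.0e9), "x": (0.0, 1.0)}}
points = propose_samples(task, 5)
```

The returned specifications would then be resolved into concrete sample data through the internet, sensors, or simulation tools, as the description states.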
S102, processing the sample data by using a prediction model to obtain a prediction result;
In one implementation of this embodiment, the prediction model is a neural network model based on a graph neural network (Graph Neural Networks, abbreviated GNN), which is well suited to pattern recognition and data mining.
Optionally, in an implementation of this embodiment, the sample data may be obtained directly by the preset learning model; alternatively, the preset learning model may first derive data information related to the sample data from the information about the training target task, and then determine the sample data from a data set obtained with tools such as the internet, sensors, or simulation software according to that data information.
The determined sample data is then input into the prediction model to obtain a prediction result.
Optionally, in an implementation manner of the embodiment of the present application, processing the sample data by using a prediction model to obtain a prediction result further includes:
Constructing a knowledge graph of the sample data;
and inputting the knowledge graph into a prediction model to obtain a prediction result.
Optionally, constructing a knowledge graph of the sample data includes: taking the items of the sample data as nodes of the knowledge graph and constructing the knowledge graph of the sample data from these knowledge nodes.
The edges of the knowledge graph are the relations among the items of sample data, so the knowledge graph is a directed attribute graph in which each node carries several attributes and attribute values. The knowledge graph represents the sample data as character strings that a computer can easily identify and process, which improves the prediction efficiency of the prediction model.
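The node-and-edge construction described above can be sketched as follows. The dict-based representation and the consecutive-sample edge relation are illustrative assumptions, since the patent does not fix a concrete data structure for the graph.

```python
def build_knowledge_graph(samples):
    """Sketch of the knowledge-graph step: each sample record becomes a
    node whose attributes and attribute values are the record's fields,
    and directed edges link consecutive samples.  The edge relation is a
    stand-in assumption; the patent only says edges are relations among
    the sample data."""
    nodes = {i: dict(rec) for i, rec in enumerate(samples)}
    edges = [(i, i + 1) for i in range(len(samples) - 1)]
    return {"nodes": nodes, "edges": edges}

graph = build_knowledge_graph([{"f": 1.0, "E": -3.0}, {"f": 1.1, "E": -2.5}])
```

A graph in this form serialises to plain strings, matching the description's point that string-encoded graphs are easy for a computer to identify and process.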
And S103, when the predicted result meets the preset condition, determining that the preset learning model is trained.
Optionally, in an embodiment of the present application, when the predicted result meets a predetermined condition, determining that the preset learning model completes training includes: and when the difference between the predicted result and the last predicted result is smaller than or equal to the preset difference value, determining that the preset learning model finishes training.
In one implementation of this embodiment, the preset difference is a numerical value whose magnitude may be set in advance, set according to the target task, or determined in some other manner; of course, the embodiment of the present application only illustrates the preset difference by way of example and does not limit the application.
In the embodiment of the application, a smaller preset difference value is generally better: a smaller difference between the current and previous prediction results indicates that the sample data obtained by the preset learning model is denser, that the prediction results derived from the sample data are converging, and that the final preset learning model is trained with better accuracy.
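The stopping rule of step S103 can be written down directly. The function name `training_converged`, the representation of a prediction result as a vector, and the max-norm comparison are assumptions for illustration.

```python
def training_converged(pred, last_pred, preset_diff):
    """The stopping rule of S103 as a predicate: training is judged
    complete when the current and previous prediction results differ by
    at most the preset difference value.  Representing a prediction
    result as a vector and comparing in the max norm are assumptions."""
    gap = max(abs(a - b) for a, b in zip(pred, last_pred))
    return gap <= preset_diff
```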
Optionally, in an embodiment of the present application, the training method of the neural network further includes: when the prediction result does not meet the predetermined condition, inputting the difference between the current and previous prediction results, together with the task information of the current training round, into the preset learning model to obtain new sample data;
And inputting the new sample data into the prediction model to obtain a new prediction result.
When the prediction result does not meet the predetermined condition, the density of the sample data obtained by the preset model is too low: the data are scattered and not comprehensive enough, so the error between the prediction result that the prediction model obtains from the current sample data and the actual data is large. The data density of the current sample data therefore needs to be raised by continuing to add suitable sample data; the densified data is used as the sample data again and the neural network is trained further. Once the neural network has obtained sufficient and comprehensive sample data, the training result has good accuracy.
Optionally, in an implementation manner of the embodiment of the present application, acquiring new sample data by using a preset learning model further includes:
Determining new sample data according to the previous prediction result and its corresponding sample data. For example, the preset learning model can determine, from the previous prediction result, the data that it currently needs to learn, and add that data to the previous sample data to form the new sample data. In another implementation of this embodiment, the preset learning model may instead update the current sample data into new sample data through tools such as the internet, sensors, or simulation software, according to the prediction result of the current sample data. Of course, this embodiment only enumerates two ways in which the preset learning model can determine new sample data from the previous prediction result and its corresponding sample data; the preset learning model may also obtain new sample data in other manners, and the application is not limited in this respect. Determining the new sample data from the previous prediction result and its sample data lets the preset learning model pinpoint the data that needs to be learned, which reduces the total amount of sample data that must be obtained during training and thus improves training efficiency while preserving training accuracy.
Optionally, in an implementation of this embodiment, when the prediction result does not meet the predetermined condition, obtaining new sample data with the preset learning model and inputting the prediction result and the new sample data into the prediction model to obtain a new prediction result includes:
when the difference between the current and previous prediction results is greater than the preset difference value, obtaining new sample data with the preset learning model, and inputting the prediction result and the new sample data into the prediction model to obtain a new prediction result.
In this implementation, the preset difference value is used to decide that the preset learning model must obtain new sample data, i.e. that the neural network needs further training. This simplifies the process of judging from the prediction result whether the neural network has finished training and thus improves training efficiency. The preset difference may be a numerical value, set manually or determined in other ways; the application is not limited in this respect.
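Putting steps S101 to S103 and the retraining branch together, a minimal sketch of the loop might look like this. The callables `learn_model` and `predict_model`, the scalar prediction result, and the `max_rounds` cap are hypothetical stand-ins rather than the patent's concrete networks.

```python
def train(task, learn_model, predict_model, preset_diff, max_rounds=100):
    """Minimal sketch of the training loop (S101-S103 plus the retraining
    branch).  learn_model(task, last_pred) returns sample data; a real
    implementation would also feed back the prediction difference, which
    this sketch folds into last_pred for brevity."""
    samples = learn_model(task, last_pred=None)       # S101: initial sample data
    last_pred = predict_model(samples)                # S102: first prediction
    for _ in range(max_rounds):
        samples = learn_model(task, last_pred=last_pred)  # request new samples
        pred = predict_model(samples)
        if abs(pred - last_pred) <= preset_diff:      # S103: predetermined condition
            return pred                               # preset learning model trained
        last_pred = pred
    raise RuntimeError("no convergence within max_rounds")

# Toy demonstration: predictions 1/n converge as the sample set grows.
_store = []
def toy_learn(task, last_pred):
    _store.append(0)
    return list(_store)
def toy_predict(samples):
    return 1.0 / len(samples)

result = train({}, toy_learn, toy_predict, preset_diff=0.01)
```

In the toy run, successive predictions 1/n and 1/(n+1) first agree to within 0.01 at n = 10, so the loop stops once eleven samples have been gathered.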
This embodiment provides a training method for a neural network, comprising: inputting task information into a preset learning model to obtain sample data, wherein the task information indicates the characteristics of the samples to be acquired; processing the sample data with a prediction model; and determining that training is complete when the prediction result meets a predetermined condition. The method effectively guides the neural network to perform autonomous learning, with high efficiency and good training accuracy.
Embodiment 2
Based on the training method of the neural network described in Embodiment 1, this embodiment illustrates a practical application scenario: the training process of a neural network that predicts antenna performance.
In practice, measuring the performance of an antenna involves a large number of test points on the antenna under test. Testing every point one by one entails an enormous workload, low test efficiency, and high cost. By instead using a neural network to predict the overall performance of the antenna from the test data of a small number of test points, the test workload can be reduced effectively, test efficiency improved, and test cost lowered.
In this embodiment, as shown in fig. 2, fig. 2 is a flowchart of a neural network training method provided in an embodiment of the present application, where the method includes the following steps:
S201, inputting simulation test data of the antenna performance into a preset learning model to obtain sample data of the antenna performance.
Optionally, in one implementation of this embodiment, the preset learning model gives data information such as the frequency (f) and position (x, y, z) of the antenna under test according to the simulation data of the antenna performance. The user places the test tool at the position (x, y, z) specified by the preset learning model, tunes it to the specified frequency (f), and measures the corresponding level value E(f, x, y, z); the sample data of the antenna performance is then determined from the frequency (f), the position (x, y, z), and the level value E(f, x, y, z).
The sample data of the antenna performance is processed with a prediction model to obtain a prediction result:
In an implementation manner of this embodiment, processing sample data of antenna performance by using a prediction model to obtain a prediction result further includes:
S202, establishing a knowledge graph of sample data according to the sample data of the antenna performance.
Optionally, in an implementation of this embodiment, establishing a knowledge graph from the sample data of the antenna performance includes:
taking the frequency (f), the position (x, y, z), and the level value E(f, x, y, z) in the sample data of the antenna performance as the attributes of a node in the knowledge graph, i.e. node V = (E, f, x, y, z), and constructing the knowledge graph of the sample data from these nodes.
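The node construction V = (E, f, x, y, z) can be sketched directly. The (f, x, y, z, E) tuple ordering of the input records is an assumption made for illustration; the patent only fixes the node attributes, not a record layout.

```python
def antenna_nodes(samples):
    """Builds the node set of Embodiment 2's knowledge graph: each test
    point becomes a node V = (E, f, x, y, z) whose attributes are the
    measured level value E and the test frequency and position.  The
    (f, x, y, z, E) ordering of the input records is an assumption."""
    return [
        {"E": E, "f": f, "x": x, "y": y, "z": z}
        for (f, x, y, z, E) in samples
    ]

# One measured test point: 1 GHz at position (0.1, 0.2, 0.3), level -20 dB.
nodes = antenna_nodes([(1.0e9, 0.1, 0.2, 0.3, -20.0)])
```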
When the prediction result meets the predetermined condition, the neural network that predicts the antenna performance is determined to have completed training.
S203, determining a prediction result of the antenna performance according to the knowledge graph of the current sample data by using the prediction model.
In one implementation manner of the present embodiment, when the prediction result of the antenna performance meets a predetermined condition, determining that the neural network predicting the antenna performance completes training includes:
S204, determining a batch of new sample data according to the prediction result determined by the sample data by utilizing a preset learning model.
In one implementation of this embodiment, determining a new batch of sample data includes: from the prediction result determined by the current antenna-performance sample data, the preset learning model determines a new batch of test points for the antenna performance; the test tool then measures the level values E(f, x, y, z) of the new test points according to their frequency (f) and position (x, y, z); and the new test-point data, together with the frequency (f), position (x, y, z), and level values E(f, x, y, z) in the previous sample data, is taken as the new sample data of the antenna performance.
S205, constructing a knowledge graph of the new sample data.
S206, obtaining a new prediction result of the antenna performance according to the knowledge graph of the new sample data by using the prediction model; the new prediction result comprises the rest frequency (f), the position (x, y, z) and the corresponding level value E (f, x, y, z) of the antenna.
S207, comparing the difference between the current predicted result and the previous predicted result;
s208, determining whether the difference meets a predetermined condition.
S209, if the predetermined condition is met, training is complete; if not, return to S204, until the difference between the prediction result determined from the sample data updated by the preset learning model and the prediction result of the previous sample data satisfies the predetermined condition, at which point the neural network that predicts the antenna performance has completed training.
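The S204 to S209 loop for the antenna case might be sketched as follows. All four parameters are hypothetical caller-supplied stand-ins for the preset learning model, the test tool, and the prediction model; the scalar prediction result is likewise an assumption for brevity.

```python
def antenna_training_loop(propose_points, measure, predict, preset_diff, max_rounds=50):
    """Sketch of S204-S209: each round the preset learning model proposes
    new test points, the test tool measures their level values, and the
    prediction model re-estimates the antenna performance; training stops
    when successive predictions agree to within preset_diff.  All four
    parameters are hypothetical caller-supplied stand-ins."""
    data = []
    last_pred = None
    for _ in range(max_rounds):
        new_points = propose_points(last_pred)             # S204: new test points
        data.extend((p, measure(p)) for p in new_points)   # acquire E(f, x, y, z)
        pred = predict(data)                               # S205-S206: predict from graph
        if last_pred is not None and abs(pred - last_pred) <= preset_diff:
            return pred                                    # S207-S209: converged
        last_pred = pred
    raise RuntimeError("no convergence within max_rounds")

# Toy demonstration with trivial stand-ins.
final = antenna_training_loop(
    propose_points=lambda last: [1.0],
    measure=lambda p: p,
    predict=lambda d: sum(v for _, v in d) / len(d),
    preset_diff=1e-6,
)
```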
In one implementation of this embodiment, when the difference satisfies the predetermined condition, the data learned by the neural network predicting the antenna performance is judged to have reached a sufficient density; the trained neural network then predicts with high accuracy, and the current training can end.
When the difference does not meet the predetermined condition, the sample data learned so far by the preset learning model is too sparse and the predicted antenna performance deviates significantly from the actual performance; the sample data must continue to be updated, and training must continue with the preset learning model and the prediction model until a high-accuracy training result is achieved.
Optionally, in an implementation of this embodiment, the predetermined condition for antenna prediction may be set as a preset difference value. The preset difference may be a number, set manually or determined in other ways. Setting the predetermined condition to a numerical value simplifies judging the difference between the prediction result of the new sample data and that of the previous sample data, which improves the efficiency of neural-network training.
Embodiment 3
Based on the training method of the neural network provided in Embodiment 1, a third embodiment of the present application provides an electronic device for training a neural network. As shown in fig. 3, a schematic structural diagram of an electronic device provided by an embodiment of the present application, the device includes: a learning module 301 and a prediction module 302,
The learning module 301 is configured to input task information into a preset learning model to obtain sample data;
the prediction module 302 is configured to obtain a prediction result by using sample data, and determine that the preset learning model completes training when the prediction result meets a predetermined condition;
Optionally, in an implementation manner of this embodiment, the prediction module 302 is further configured to determine that the preset learning model completes training when a difference between the predicted result and the last predicted result is less than or equal to a preset difference value.
Optionally, in an implementation manner of this embodiment, the prediction module 302 is further configured to, when the predicted result does not meet the predetermined condition, input a difference between the predicted result and the previous predicted result and task information into the preset learning model to obtain new sample data, and input the new sample data into the prediction model to obtain a new predicted result.
Optionally, in an implementation manner of this embodiment, the prediction module 302 determines that the prediction result does not meet the predetermined condition when a difference between the prediction result and a last prediction result is greater than the preset difference value.
Optionally, in an embodiment of the present application, the electronic device further includes a knowledge-graph construction module 303, where the knowledge-graph construction module 303 is configured to construct a knowledge graph of the sample data;
the prediction module 302 is further configured to input a knowledge graph of the sample data into the prediction model, to obtain a prediction result of the sample data.
Embodiment 4
Based on the above embodiments, this embodiment further provides a storage medium. As shown in fig. 4, a hardware structure diagram of an electronic device provided by an embodiment of the present application, the hardware of the electronic device includes:
One or more processors 401;
a storage medium 402, the storage medium 402 configured to store one or more readable programs 412;
when executed by the one or more processors 401, the one or more programs 412 cause the one or more processors to implement the neural network training method as in any of the embodiments described above.
The hardware also includes a communication interface 403 and a communication bus 404;
Wherein the processor 401, the storage medium 402 and the communication interface 403 perform communication with each other through the communication bus 404;
Wherein the processor 401 may be specifically configured to: and acquiring sample data by using a preset learning model, obtaining a prediction result by using a prediction model according to the sample data, and determining that the preset learning model finishes training when the prediction result meets a preset condition.
The neural-network-training electronic device of the embodiments of the present application exists in a variety of forms, including but not limited to:
A mobile communication device: such devices have mobile communication capabilities and are primarily aimed at providing voice and data communications. Such terminals include: smart phones (e.g., iPhone), multimedia phones, functional phones, low-end phones, etc.
Ultra mobile personal computer device: such devices are in the category of personal computers, having computing and processing functions, and generally also having mobile internet access characteristics. Such terminals include: PDA, MID, and UMPC devices, etc., such as iPad.
Portable entertainment device: such devices may display and play multimedia content. The device comprises: audio, video players (e.g., iPod), palm game consoles, electronic books, and smart toys and portable car navigation devices.
Other electronic devices with data interaction functions.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
In this specification, each embodiment is described in a progressive manner; for identical or similar parts between embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively simply since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (3)

1. A method of training a neural network, comprising:
s201, inputting simulation test data of antenna performance into a preset learning model to obtain sample data of the antenna performance, wherein the simulation test data of the antenna performance is used for indicating characteristics of a sample to be obtained;
The preset learning model gives the frequency (f) and position (x, y, z) of the antenna to be tested according to the simulation data of the antenna performance; by placing a test fixture at the position specified by the position (x, y, z) information and tuning the test fixture to the specified antenna frequency (f) given by the preset learning model, a user can determine the sample data of the antenna performance from the frequency (f), the position (x, y, z), and the corresponding level value E(f, x, y, z);
Processing the sample data of the antenna performance by using a prediction model to obtain a prediction result;
s202, establishing a knowledge graph of sample data according to the sample data of the antenna performance;
Constructing a knowledge graph of the sample data by using the frequency (f), the position (x, y, z) and the level value E (f, x, y, z) in the sample data of the antenna performance as the attributes of the nodes in the knowledge graph, namely the nodes V= (E, f, x, y, z);
When the prediction result satisfies a predetermined condition, determining that the neural network for predicting the antenna performance has completed training;
S203, determining a prediction result of the antenna performance according to the knowledge graph of the current sample data by using a prediction model;
when the predicted result of the antenna performance meets a predetermined condition, determining that the neural network predicting the antenna performance completes training, including:
S204, determining a batch of new sample data according to a prediction result determined by the sample data by using a preset learning model;
The determining a batch of new sample data comprises: the preset learning model determines a batch of new test points of the antenna performance according to the prediction result determined from the current sample data of the antenna performance; the level values E(f, x, y, z) of the new test points are acquired through the test fixture according to the frequency (f) and position (x, y, z) information of the new test points; and the frequency (f), position (x, y, z), and level value E(f, x, y, z) data, together with the previous sample data, are taken as the new sample data of the antenna performance;
s205, constructing a knowledge graph of new sample data;
S206, the prediction model obtains a new prediction result of the antenna performance according to the knowledge graph of the new sample data; wherein the new prediction result comprises the remaining frequencies (f), positions (x, y, z), and corresponding level values E(f, x, y, z) of the antenna;
s207, comparing the difference between the current predicted result and the previous predicted result;
s208, determining whether the difference meets a preset condition;
S209, if the predetermined condition is satisfied, training is completed; if the predetermined condition is not satisfied, the process returns to S204 until the difference between the prediction result determined from the sample data updated by the preset learning model and the prediction result of the previous sample data satisfies the predetermined condition, whereupon the training of the neural network for predicting the antenna performance is completed.
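Steps S201 to S209 describe an iterative sampling-and-prediction loop. The following Python sketch illustrates only the control flow of that loop; `measure_level`, `propose_test_points`, and `predict` are hypothetical stand-ins for the physical test fixture, the preset learning model, and the prediction model, none of which are specified in detail by the claims:

```python
import random

def measure_level(f, x, y, z):
    """Stand-in for the physical test fixture: returns a level E(f, x, y, z).
    Here a synthetic smooth function; in practice this is a measurement."""
    return 1.0 / (1.0 + 0.01 * f + x * x + y * y + z * z)

def propose_test_points(samples, n=4):
    """Stand-in for the preset learning model: propose n new (f, x, y, z)
    test points (seeded on the sample count only to make the sketch
    deterministic)."""
    random.seed(len(samples))
    return [(random.uniform(1, 10), random.uniform(-1, 1),
             random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

def predict(samples):
    """Stand-in for the prediction model over the knowledge graph: here
    just the mean measured level (a real model would regress E)."""
    return sum(e for e, *_ in samples) / len(samples)

def train(threshold=1e-3, max_rounds=50):
    # S201: obtain an initial batch of sample data (E, f, x, y, z)
    samples = [(measure_level(f, x, y, z), f, x, y, z)
               for f, x, y, z in propose_test_points([], n=8)]
    prev = predict(samples)                      # S202-S203
    for _ in range(max_rounds):
        # S204-S205: new test points, merged into the accumulated samples
        new_pts = propose_test_points(samples)
        samples += [(measure_level(f, x, y, z), f, x, y, z)
                    for f, x, y, z in new_pts]
        cur = predict(samples)                   # S206
        if abs(cur - prev) <= threshold:         # S207-S208
            return cur, len(samples)             # S209: training complete
        prev = cur
    return prev, len(samples)                    # budget exhausted
```

The loop terminates either when consecutive prediction results agree within the threshold or when the measurement budget (`max_rounds`, an illustrative parameter) is exhausted.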
2. The method for training a neural network according to claim 1, wherein the determining that the neural network predicting the antenna performance completes training when the prediction result satisfies a predetermined condition comprises:
When the difference between the current prediction result and the previous prediction result is smaller than or equal to a preset difference value, determining that the preset learning model has completed training.
3. The method of training a neural network of claim 1, wherein the predetermined condition is not satisfied, comprising:
When the difference between the current prediction result and the previous prediction result is larger than the preset difference value, determining that the prediction result does not satisfy the predetermined condition.
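Claims 2 and 3 together define the stopping rule as a threshold test on the difference between consecutive prediction results. A minimal sketch, assuming the difference is taken as an absolute value and using an illustrative parameter name (`preset_diff`) not taken from the patent:

```python
def training_converged(current_pred: float, previous_pred: float,
                       preset_diff: float = 0.01) -> bool:
    """Claim 2: difference <= preset difference value -> training complete.
    Claim 3: difference >  preset difference value -> condition not met."""
    return abs(current_pred - previous_pred) <= preset_diff
```

For example, with the default threshold, predictions 0.505 and 0.500 are treated as converged, while 0.60 and 0.50 are not.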
CN202010259862.1A 2020-04-03 2020-04-03 Training method of neural network Active CN111461328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010259862.1A CN111461328B (en) 2020-04-03 2020-04-03 Training method of neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010259862.1A CN111461328B (en) 2020-04-03 2020-04-03 Training method of neural network

Publications (2)

Publication Number Publication Date
CN111461328A CN111461328A (en) 2020-07-28
CN111461328B true CN111461328B (en) 2024-04-30

Family

ID=71681642

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010259862.1A Active CN111461328B (en) 2020-04-03 2020-04-03 Training method of neural network

Country Status (1)

Country Link
CN (1) CN111461328B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515896B (en) * 2021-08-06 2022-08-09 红云红河烟草(集团)有限责任公司 Data missing value filling method for real-time cigarette acquisition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898218A (en) * 2018-05-24 2018-11-27 阿里巴巴集团控股有限公司 A kind of training method of neural network model, device and computer equipment
CN109325508A (en) * 2017-07-31 2019-02-12 阿里巴巴集团控股有限公司 The representation of knowledge, machine learning model training, prediction technique, device and electronic equipment
CN110163201A (en) * 2019-03-01 2019-08-23 腾讯科技(深圳)有限公司 Image measurement method and apparatus, storage medium and electronic device
CN110210654A (en) * 2019-05-20 2019-09-06 南京星火技术有限公司 Product model designing system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325508A (en) * 2017-07-31 2019-02-12 阿里巴巴集团控股有限公司 The representation of knowledge, machine learning model training, prediction technique, device and electronic equipment
CN108898218A (en) * 2018-05-24 2018-11-27 阿里巴巴集团控股有限公司 A kind of training method of neural network model, device and computer equipment
CN110163201A (en) * 2019-03-01 2019-08-23 腾讯科技(深圳)有限公司 Image measurement method and apparatus, storage medium and electronic device
CN110210654A (en) * 2019-05-20 2019-09-06 南京星火技术有限公司 Product model designing system and method

Also Published As

Publication number Publication date
CN111461328A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN109976998B (en) Software defect prediction method and device and electronic equipment
US10558935B2 (en) Weight benefit evaluator for training data
CN112434188B (en) Data integration method, device and storage medium of heterogeneous database
CN111611390B (en) Data processing method and device
CN110069997B (en) Scene classification method and device and electronic equipment
CN111461328B (en) Training method of neural network
CN116342882A (en) Automatic segmentation method, system and equipment for cotton root system image
CN110110017A (en) A kind of interest point data association method, device and server
CN114298326A (en) Model training method and device and model training system
CN110245978A (en) Policy evaluation, policy selection method and device in tactful group
CN114049530A (en) Hybrid precision neural network quantization method, device and equipment
CN116560968A (en) Simulation calculation time prediction method, system and equipment based on machine learning
CN109977925B (en) Expression determination method and device and electronic equipment
CN111612158A (en) Model deployment method, device, equipment and storage medium
CN109829051B (en) Method and device for screening similar sentences of database
CN108416426B (en) Data processing method, device and computer readable storage medium
CN113282535B (en) Quantization processing method and device and quantization processing chip
CN115774854A (en) Text classification method and device, electronic equipment and storage medium
CN111582456B (en) Method, apparatus, device and medium for generating network model information
CN108229572A (en) A kind of parameter optimization method and computing device
CN111144098B (en) Recall method and device for extended question
CN110264333B (en) Risk rule determining method and apparatus
CN115688042A (en) Model fusion method, device, equipment and storage medium
CN113592557A (en) Attribution method and device of advertisement putting result, storage medium and electronic equipment
CN111680170B (en) Physical characteristic prediction method and device of periodic structure and related products

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant