CN117932337A - Method and device for training a neural network based on an embedded platform
- Publication number: CN117932337A (application number CN202410069955.6A)
- Authority: CN (China)
- Prior art keywords: neural network, training, sample, target classification, distance
- Prior art date: 2024-01-17
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24137 - Pattern recognition; classification techniques based on distances to cluster centroids
- G06N3/0499 - Neural networks; feedforward networks
- G06N3/063 - Neural networks; physical realisation using electronic means
- G06N3/084 - Neural networks; learning methods; backpropagation, e.g. using gradient descent
Abstract
The application belongs to the field of embedded technology and discloses a method and a device for training a neural network based on an embedded platform. The method comprises the following steps: acquiring a preset number of initial samples, and respectively taking the initial samples as weighted clustering centers; a receiving step: receiving a new sample, and calculating the distance between the new sample and each initial sample; determining a target classification according to those distances; placing the new sample into the target classification; calculating a first matching distance between the new sample and each initial sample in the target classification; updating the weighted clustering center of the target classification based on each first matching distance; calculating the learning rate of the target classification, and inputting the weighted clustering center into a pre-trained neural network; and taking the new sample as an initial sample in the target classification and returning to the receiving step. The application enables real-time training of the neural network on the embedded platform without occupying a large amount of flash space.
Description
Technical Field
The application relates to the field of embedded technology, in particular to a method and a device for training a neural network based on an embedded platform.
Background
At present, a neural network is generally trained on a PC, and after training finishes, the parameter matrix trained on the PC is transferred to the embedded platform for forward calculation. Training on the embedded platform itself would require the platform to store a large number of samples, i.e., to have a large flash capacity; however, the flash of most existing embedded platforms (single-chip microcomputer platforms) is within 32k/64k, and the platform must also store the functional programs used in its application, so it is difficult to store the sample data for neural network training. As a result, existing embedded platforms cannot independently train a neural network.
Disclosure of Invention
The application provides a method and a device for training a neural network based on an embedded platform, which enable real-time training of the neural network on the embedded platform without occupying a large amount of flash space.
In a first aspect, an embodiment of the present application provides a method for training a neural network based on an embedded platform, including:
acquiring a preset number of initial samples, and respectively taking the initial samples as weighted clustering centers;
a receiving step: receiving a new sample, and calculating the distance between the new sample and each initial sample;
determining a target classification according to the distance from each initial sample;
placing the new sample into the target classification;
calculating a first matching distance between the new sample and each initial sample in the target classification;
updating the weighted clustering center of the target classification based on each first matching distance;
calculating the learning rate of the target classification, and inputting the weighted clustering center into a pre-trained neural network;
taking the new sample as an initial sample in the target classification, and returning to the receiving step.
Further, receiving the new sample and calculating the distance between the new sample and each initial sample includes:
calculating the Euclidean distance between the new sample and each initial sample using a two-norm formula.
Further, determining the target classification according to the distance from each initial sample includes:
taking the sample classification in which the initial sample corresponding to the minimum distance is located as the target classification.
Further, updating the weighted clustering center of the target classification based on each first matching distance includes:
normalizing each first matching distance to obtain a corresponding second matching distance;
and updating the weighted clustering center of the target classification based on each second matching distance.
Further, updating the weighted clustering center of the target classification based on each second matching distance includes: taking the sum of each initial sample in the target classification and the corresponding second matching distance as the weighted clustering center of the target classification.
Further, the pre-trained neural network is a BP neural network, and is used for calculating the remaining battery capacity or the remaining charge/discharge time.
Further, the initial samples and the new sample each comprise a real-time current value, a real-time voltage value, a current change value and a voltage change value of the battery.
In a second aspect, an embodiment of the present application provides an apparatus for training a neural network based on an embedded platform, including:
The acquisition module is used for acquiring a preset number of initial samples and respectively taking them as weighted clustering centers;
the receiving module is used for receiving a new sample and calculating the distance between the new sample and each initial sample;
the determining module is used for determining a target classification according to the distance between the new sample and each initial sample;
the classification module is used for placing the new sample into the target classification;
the calculation module is used for calculating a first matching distance between the new sample and each initial sample in the target classification;
the updating module is used for updating the weighted clustering center of the target classification based on each first matching distance;
the training module is used for calculating the learning rate of the target classification and inputting the weighted clustering center into a pre-trained neural network;
and the circulation module is used for taking the new sample as an initial sample in the target classification and returning to the receiving module.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method for training a neural network based on an embedded platform of any of the embodiments described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a method of training a neural network based on an embedded platform as in any of the embodiments described above.
In summary, compared with the prior art, the technical scheme provided by the embodiment of the application has the following beneficial effects:
According to the method for training a neural network based on an embedded platform, the weighted clustering center is updated according to each received new sample, and the weighted clustering center is input into the pre-trained neural network for training at the updated learning rate, so that new samples are received and the neural network is trained in real time. The embedded platform can continuously receive data as new samples, and at the same time the weighted-clustering-center training mode does not require a large amount of flash space, so real-time training of the neural network is realized on the embedded platform.
Drawings
Fig. 1 is a flowchart of a method for training a neural network based on an embedded platform according to an embodiment of the present application.
Fig. 2 is a block diagram of an apparatus for training a neural network based on an embedded platform according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. The embodiments described are only some embodiments of the present application, not all of them.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, an embodiment of the present application provides a method for training a neural network based on an embedded platform, including:
Step S1, obtaining a preset number of initial samples, and respectively taking the initial samples as weighted clustering centers.
Step S2, a receiving step: a new sample is received and a distance between the new sample and each initial sample is calculated.
Specifically, a two-norm formula may be used to calculate the Euclidean distance between the new sample and each of the initial samples. The specific formula is as follows:
d(x, y) = ||x - y||_2 = sqrt((x_1 - y_1)^2 + (x_2 - y_2)^2 + ... + (x_n - y_n)^2)
where x = (x_1, x_2, ..., x_n) is an initial sample in n dimensions, y is the new sample, and ||·||_2 denotes the two-norm.
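For illustration only, a minimal sketch of this distance computation in Python; the function name euclidean_distance is illustrative and does not appear in the patent:

```python
import numpy as np

def euclidean_distance(new_sample: np.ndarray, initial_sample: np.ndarray) -> float:
    """Two-norm (Euclidean) distance between the new sample and an initial sample."""
    return float(np.linalg.norm(new_sample - initial_sample, ord=2))

# Example with 4-dimensional battery samples (current, voltage, dI, dV):
d = euclidean_distance(np.array([1.20, 3.70, 0.10, -0.02]),
                       np.array([1.00, 3.90, 0.00, 0.01]))
```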
And S3, determining the target classification according to the distance between the new sample and each initial sample.
Specifically, the sample classification in which the initial sample corresponding to the minimum distance is located may be regarded as the target classification.
And S4, placing the new sample into the target classification.
First, the preset number N, i.e., the number of sample classifications, may be 5, 7, 11, etc. After N initial samples are received, they serve as the N initial weighted clustering centers. When the (N+1)-th sample is received, the distances between it and the original N samples, i.e., the N weighted clustering centers, are calculated, and the sample is placed into the corresponding sample classification, as sketched below.
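A sketch of this classification step (steps S2-S4), reusing the euclidean_distance helper above; all names are illustrative, not the patent's:

```python
import numpy as np

def assign_to_classification(new_sample: np.ndarray,
                             centers: list,
                             classifications: list) -> int:
    """Steps S2-S4: compute the distance from the new sample to each of the
    N weighted clustering centers, pick the classification with the minimum
    distance, and place the new sample into it."""
    distances = [euclidean_distance(new_sample, c) for c in centers]
    target = int(np.argmin(distances))           # step S3: minimum-distance classification
    classifications[target].append(new_sample)   # step S4: place into target classification
    return target
```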
Step S5, calculating a first matching distance between the new sample and each initial sample in the target classification.
Since the weighted clustering center has the same dimension as the samples, a two-norm formula may also be used to calculate the first matching distance.
And S6, updating the weighted clustering center of the target classification based on each first matching distance.
And S7, calculating the learning rate of the target classification, and inputting the weighted clustering center into the pre-trained neural network.
Step S8, taking the new sample as an initial sample in the target classification, and returning to the receiving step.
The learning rate of each classification is determined by the ratio of the number of samples in the sample classification corresponding to each weighted clustering center; for example, the learning rate of the classification with the largest number of samples is set to 0.1, and that of the classification with the next largest number of samples is set to 0.05. The learning rate is the step size by which the pre-trained neural network is adjusted at each training step.
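One plausible reading of this rule, sketched below under stated assumptions: the rate scales linearly with a classification's share of samples, and the largest classification gets 0.1 as in the example above. Both the scaling and the function name are assumptions, not the patent's specification:

```python
def classification_learning_rate(sample_counts, target, max_lr=0.1):
    """Learning rate of the target classification, assumed proportional to
    its sample count; the classification with the most samples gets max_lr."""
    return max_lr * sample_counts[target] / max(sample_counts)
```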
In the specific training process, steps S2-S8 are performed in a loop: new samples are continuously received and the weighted clustering center of the corresponding target classification is updated, so that the pre-trained neural network is trained in real time.
According to the method for training a neural network based on an embedded platform, the weighted clustering center is updated according to each received new sample, and the weighted clustering center is input into the pre-trained neural network for training at the updated learning rate, so that new samples are received and the neural network is trained in real time. The embedded platform can continuously receive data as new samples, and at the same time the weighted-clustering-center training mode does not require a large amount of flash space, so real-time training of the neural network is realized on the embedded platform.
In some embodiments, updating the weighted clustering center of the target classification based on the first matching distances includes:
Step S61, normalizing each first matching distance to obtain a corresponding second matching distance.
Step S62, updating the weighted clustering center of the target classification based on each second matching distance. Specifically, the sum of each initial sample in the target classification and the corresponding second matching distance can be taken as the weighted clustering center of the target classification.
In a specific implementation, a softmax-style normalization is performed on the first matching distances: if the first matching distances of the samples are D1, D2, ..., then DX/(D1 + D2 + ...) is computed to obtain the second matching distance of the X-th sample; the sum of each sample and its second matching distance is then calculated and used as the new weighted clustering center of the target classification.
Meanwhile, when the matching distance is calculated and the weighted clustering center is updated, the sample count of the target classification is increased by 1, because a new sample has been added to the target classification.
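As an illustration, a sketch of this update (steps S5-S6) under two stated assumptions: the normalization is the simple DX/(D1 + D2 + ...) division described above rather than a true exponential softmax, and the ambiguous "sum of each initial sample and the corresponding second matching distance" is read as a weighted sum of the initial samples with the second matching distances as weights. The helper below is illustrative, not the patent's implementation:

```python
import numpy as np

def update_weighted_center(initial_samples: list, new_sample: np.ndarray) -> np.ndarray:
    """Steps S5-S6: update the weighted clustering center of the target
    classification after a new sample has been placed into it."""
    # Step S5: first matching distances (two-norm, since the dimensions match)
    first = np.array([np.linalg.norm(new_sample - s, ord=2)
                      for s in initial_samples])
    # Normalization to second matching distances: D_X / (D_1 + D_2 + ...)
    second = first / first.sum()
    # Assumed reading: weighted sum of the initial samples, weighted by the
    # second matching distances, becomes the new weighted clustering center
    return np.sum(second[:, None] * np.stack(initial_samples), axis=0)
```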
In some embodiments, the pre-trained neural network is a BP neural network.
The pre-trained neural network is used for calculating the remaining battery capacity or the remaining charge/discharge time; the initial samples and the new sample each comprise a real-time current value, a real-time voltage value, a current change value and a voltage change value of the battery.
The current change value is the difference between the current value at the present moment and that at the previous moment, and the voltage change value is the corresponding difference for the voltage.
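For illustration, one such four-dimensional sample could be assembled from two consecutive (current, voltage) readings; the variable names are illustrative:

```python
import numpy as np

def make_sample(i_now: float, v_now: float,
                i_prev: float, v_prev: float) -> np.ndarray:
    """Sample = (real-time current, real-time voltage,
    current change value, voltage change value)."""
    return np.array([i_now, v_now, i_now - i_prev, v_now - v_prev])
```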
Specifically, by executing the training steps of the application with these current- and voltage-related values as samples, a neural network capable of calculating the remaining battery capacity or the remaining charge/discharge time can be trained on the embedded platform, as the following end-to-end sketch illustrates.
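Tying the pieces together, a schematic of the full loop (steps S2-S8); train_step stands in for one backpropagation update of the BP network at the given learning rate, receive_sample for the platform's data acquisition, and the whole is a sketch under the assumptions stated above, not the patent's implementation:

```python
def training_loop(centers, classifications, receive_sample, train_step):
    """Real-time training on the embedded platform (steps S2-S8). Only the
    weighted clustering centers and per-classification samples are kept,
    so no large flash space is needed for a full training set."""
    while True:
        new_sample = receive_sample()                               # step S2
        target = assign_to_classification(new_sample, centers,
                                          classifications)          # steps S3-S4
        initial = classifications[target][:-1]                      # samples before this one
        if initial:
            centers[target] = update_weighted_center(initial,
                                                     new_sample)    # steps S5-S6
        counts = [len(c) for c in classifications]
        lr = classification_learning_rate(counts, target)           # step S7
        train_step(centers[target], lr)                             # step S7: train network
        # Step S8: the new sample now counts as an initial sample of the
        # target classification; loop back to the receiving step.
```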
Referring to fig. 2, another embodiment of the present application provides an apparatus for training a neural network based on an embedded platform, including:
The acquiring module 101 is configured to acquire a preset number of initial samples, and respectively serve as weighted clustering centers.
The receiving module 102 is configured to receive a new sample, and calculate the distance between the new sample and each initial sample.
A determining module 103, configured to determine a target classification according to the distance between the new sample and each initial sample.
The classification module 104 is configured to put the new sample into the target classification.
A calculation module 105, configured to calculate a first matching distance between the new sample and each initial sample in the target classification.
An updating module 106, configured to update the weighted cluster center of the target classification based on each first matching distance.
The training module 107 is used for calculating the learning rate of the target classification and inputting the weighted clustering center into the pre-trained neural network.
The loop module 108 is configured to take the new sample as an initial sample in the target class and return to the receiving module 102.
Further, the receiving module 102 is configured to calculate the euclidean distance between the new sample and each initial sample by using a two-norm formula.
Further, the determining module 103 is configured to take, as the target classification, a sample classification in which the initial sample corresponding to the minimum distance is located.
For specific limitations of the apparatus for training a neural network based on an embedded platform provided in this embodiment, reference may be made to the above embodiments of the method for training a neural network based on an embedded platform, which are not repeated here. The modules in the apparatus may be implemented in whole or in part by software, by hardware, or by a combination of the two. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor can call them and execute the operations corresponding to the above modules.
Embodiments of the present application provide a computer device that may include a processor, memory, network interface, and database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, causes the processor to perform the steps of a method of training a neural network based on an embedded platform as in any of the embodiments described above.
The working process, working details and technical effects of the computer device provided in this embodiment may be referred to the above embodiments of a method for training a neural network based on an embedded platform, which are not described herein.
An embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a method for training a neural network based on an embedded platform as in any of the embodiments above. The computer readable storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash Memory, and/or a Memory Stick (Memory Stick), etc., where the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable devices. The working process, working details and technical effects of the computer readable storage medium provided in this embodiment can be referred to the above embodiments of a method for training a neural network based on an embedded platform, which are not described herein.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (10)
1. A method for training a neural network based on an embedded platform, comprising:
Acquiring a preset number of initial samples, and respectively taking the initial samples as weighted clustering centers;
a receiving step: receiving a new sample, and calculating the distance between the new sample and each initial sample;
determining a target classification based on the distance from each of the initial samples;
Placing the new sample into the target classification;
Calculating a first matching distance between the new sample and each initial sample in the target classification;
updating the weighted clustering center of the target classification based on each first matching distance;
calculating the learning rate of the target classification, and inputting the weighted clustering center into a pre-trained neural network;
and taking the new sample as an initial sample in the target classification, and returning to the receiving step.
2. The method of training a neural network based on an embedded platform of claim 1, wherein receiving a new sample and calculating the distance of the new sample from each of the initial samples comprises:
calculating the Euclidean distance between the new sample and each initial sample using a two-norm formula.
3. The method of training a neural network based on an embedded platform of claim 1, wherein said determining a target classification based on a distance from each of said initial samples comprises:
and taking the sample classification of the initial sample corresponding to the minimum distance as the target classification.
4. The method of training a neural network based on an embedded platform of claim 1, wherein updating the weighted clustering center of the target classification based on each of the first matching distances comprises:
normalizing each first matching distance to obtain a corresponding second matching distance;
updating the weighted clustering center of the target classification based on each of the second matching distances.
5. The method of training a neural network based on an embedded platform of claim 4, wherein said updating the weighted clustering center of the target classification based on each of the second matching distances comprises:
taking the sum of each initial sample in the target classification and the corresponding second matching distance as the weighted clustering center of the target classification.
6. The method of training a neural network based on an embedded platform of claim 1, wherein the pre-trained neural network is a BP neural network and is used for calculating the remaining battery capacity or the remaining charge-discharge time.
7. The method of training a neural network based on an embedded platform of claim 6, wherein the initial samples and the new sample each comprise a real-time current value, a real-time voltage value, a current change value, and a voltage change value of a battery.
8. An apparatus for training a neural network based on an embedded platform, comprising:
The acquisition module is used for acquiring a preset number of initial samples and respectively taking them as weighted clustering centers;
the receiving module is used for receiving a new sample and calculating the distance between the new sample and each initial sample;
the determining module is used for determining a target classification according to the distance between the new sample and each initial sample;
the classification module is used for placing the new sample into the target classification;
the calculating module is used for calculating a first matching distance between the new sample and each initial sample in the target classification;
the updating module is used for updating the weighted clustering center of the target classification based on each first matching distance;
the training module is used for calculating the learning rate of the target classification and inputting the weighted clustering center into a pre-trained neural network;
and the circulation module is used for taking the new sample as an initial sample in the target classification and returning to the receiving module.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method for training a neural network based on an embedded platform according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method for training a neural network based on an embedded platform according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202410069955.6A (granted as CN117932337B) | 2024-01-17 | 2024-01-17 | Method and device for training neural network based on embedded platform
Publications (2)

Publication Number | Publication Date
---|---
CN117932337A | 2024-04-26
CN117932337B | 2024-08-16
Family
ID=90758545

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202410069955.6A (CN117932337B, active) | Method and device for training neural network based on embedded platform | 2024-01-17 | 2024-01-17

Country Status (1)

Country | Link
---|---
CN | CN117932337B
Patent Citations (11)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN108509984A | 2018-03-16 | 2018-09-07 | 新智认知数据服务有限公司 | Activation value quantization training method and device
CN109002843A | 2018-06-28 | 2018-12-14 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium
WO2020001196A1 | 2018-06-28 | 2020-01-02 | Oppo广东移动通信有限公司 | Image processing method, electronic device, and computer readable storage medium
US20200027002A1 | 2018-07-20 | 2020-01-23 | Google Llc | Category learning neural networks
CN112529029A | 2019-09-18 | 2021-03-19 | 华为技术有限公司 | Information processing method, neural network training method, device and storage medium
WO2021218226A1 | 2020-04-26 | 2021-11-04 | 华为技术有限公司 | Method for verifying labeled data, method and device for model training
CN113920397A | 2021-10-12 | 2022-01-11 | 京东科技信息技术有限公司 | Method and device for training image classification model and method and device for image classification
CN116974735A | 2022-04-22 | 2023-10-31 | 戴尔产品有限公司 | Method, electronic device and computer program product for model training
US20230350019A1 | 2022-04-28 | 2023-11-02 | Zadar Labs, Inc | Advanced adaptive clustering technique for portable radars
CN115983477A | 2023-01-04 | 2023-04-18 | 湖南大唐先一科技有限公司 | Load prediction method based on K-means clustering and convolutional neural network model
CN116596095A | 2023-07-17 | 2023-08-15 | 华能山东发电有限公司众泰电厂 | Training method and device of carbon emission prediction model based on machine learning
Also Published As

Publication Number | Publication Date
---|---
CN117932337B | 2024-08-16
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant