CN109767001A - Method and device for constructing a neural network model, and mobile terminal - Google Patents
- Publication number: CN109767001A
- Application number: CN201910010775.XA
- Authority: CN (China)
- Prior art keywords: neural network, network model, weight, constructing, mobile terminal
- Prior art date: 2019-01-07
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present invention discloses a method and a device for constructing a neural network model, and a mobile terminal, belonging to the technical field of computer applications. The method includes: constructing a neural network to perform deep learning training on collected sample images, the neural network comprising a plurality of network structures and their corresponding weight parameters; and cutting the network structure of the neural network according to the weight parameters, so that the finally obtained neural network model keeps a high accuracy rate while greatly reducing the calculation time of deep learning, improving the calculation rate when deep learning is performed with the neural network model, and making it possible to apply the neural network model to a mobile terminal.
Description
Technical Field
The invention relates to the technical field of computer applications, and in particular to a method and a device for constructing a neural network model, and a mobile terminal.
Background
Deep learning is well known as a revolutionary technique in the field of machine learning, particularly in computer vision. Just as deep learning has overtaken traditional models in image classification, deep learning models are now the best-performing methods in object detection. However, deep learning depends on neural networks, and computing with a neural network consumes a great deal of computing resources, so deep learning must be supported by high-performance hardware such as a high-end graphics card. This limits deep learning in practical applications.
At present, to solve the above problem, deep learning generally adopts a cloud service mode: both model training and model use run on a high-performance server equipped with a high-end graphics card, the client uploads the picture to be processed to the server, and the server returns the result to the client after processing is completed, thereby completing deep-learning-based object detection. However, a deep learning application based on a cloud service must depend on the network, so mobile applications of deep learning are currently difficult to popularize.
Disclosure of Invention
The invention provides a method and a device for constructing a neural network model, and a mobile terminal, and aims to solve the technical problem that deep learning in the related art is heavily dependent on the network.
In a first aspect, a method for constructing a neural network model is provided, which includes:
constructing a neural network to carry out deep learning training on the collected sample image, wherein the neural network comprises a plurality of network structures and weight parameters corresponding to the network structures;
and cutting the network structure of the neural network according to the weight parameters to obtain a neural network model.
Optionally, the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model includes:
comparing the weight parameter corresponding to each network structure with a preset weight parameter threshold value to obtain a small weight network structure;
and deleting the small-weight network structure from the neural network to obtain a neural network model.
Optionally, after the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model, the method further includes:
and reducing the precision of each weight parameter in the neural network model.
Optionally, after the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model, the method further includes:
constructing a check set by using the sample image;
inputting the check set into the neural network model to obtain corresponding accuracy;
and adjusting the weight parameters of the neural network model according to the accuracy.
In a second aspect, an apparatus for constructing a neural network model is provided, including:
the training module is used for constructing a neural network to carry out deep learning training on the acquired sample image, and the neural network comprises a plurality of network structures and weight parameters corresponding to the network structures;
and the cutting module is used for cutting the network structure of the neural network according to the weight parameters to obtain a neural network model.
Optionally, the cutting module includes:
the comparison unit is used for comparing the weight parameters corresponding to the network structures with preset weight parameter thresholds to obtain small-weight network structures;
and the cutting unit is used for deleting the small-weight network structure from the neural network to obtain a neural network model.
Optionally, the apparatus further comprises:
and the precision reduction module is used for reducing the precision of each weight parameter in the neural network model.
Optionally, the apparatus further comprises:
the check set construction module is used for constructing a check set by adopting the sample image;
the accuracy rate obtaining module is used for inputting the check set into the neural network model to obtain corresponding accuracy rate;
and the weight parameter adjusting module is used for adjusting the weight parameters of the neural network model according to the accuracy.
In a third aspect, a mobile terminal is provided, where the mobile terminal includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
In a fourth aspect, a computer readable storage medium is provided for storing a program, characterized in that the program, when executed, causes a mobile terminal to perform the method according to the first aspect.
The technical solutions provided by the embodiments of the invention can achieve the following beneficial effects:
when a neural network model is constructed, a neural network is first constructed to perform deep learning training on the collected sample images, and the network structure of the neural network is then cut according to its weight parameters. The finally obtained neural network model therefore keeps a high accuracy rate while greatly reducing the calculation time of deep learning, which improves the calculation rate when deep learning is performed with the neural network model and makes it possible to apply the neural network model to a mobile terminal.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of constructing a neural network model in accordance with an exemplary embodiment.
FIG. 2 is a flowchart illustrating a specific implementation of step S120 in the method for constructing a neural network model according to the embodiment corresponding to FIG. 1.
FIG. 3 is a schematic diagram illustrating a neural network, according to an example embodiment.
FIG. 4 is a flowchart of another method for constructing a neural network model according to the embodiment corresponding to FIG. 1.
FIG. 5 is a block diagram illustrating an apparatus for constructing a neural network model in accordance with an exemplary embodiment.
FIG. 6 is a block diagram of a specific implementation of the cutting module 120 in the apparatus for constructing a neural network model according to the embodiment shown in FIG. 5.
FIG. 7 is a block diagram of a specific implementation of another apparatus for constructing a neural network model according to the embodiment corresponding to FIG. 5.
FIG. 8 is a block diagram illustrating a mobile terminal according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatuses and methods consistent with certain aspects of the invention, as recited in the appended claims.
FIG. 1 is a flow chart illustrating a method of constructing a neural network model in accordance with an exemplary embodiment. The method can be used in mobile terminals such as smart phones, smart home devices and computers. As shown in FIG. 1, the method for constructing the neural network model may include steps S110 and S120.
Step S110, constructing a neural network to carry out deep learning training on the collected sample images.
Step S120, cutting the network structure of the neural network according to the weight parameters to obtain a neural network model.
Before the model can be used for recognition, a large amount of sample data needs to be collected in advance, and deep learning training is performed on the sample data through the constructed neural network.
When deep learning training is performed on the sample data, the constructed neural network can be of various types; for example, a convolutional neural network may be employed.
The convolutional neural network was proposed by LeCun et al. in 1998 for text recognition and was named LeNet-5. The convolution operation builds on the two-dimensional structure of the image, which defines a local receptive field: each low-level feature is connected only to a subset of the input, such as a topological neighborhood. This topological locality constraint makes the weight matrix of a convolutional layer very sparse, so two layers connected by a convolution operation are only locally connected. Computing such a sparse matrix multiplication is more convenient and efficient than computing a dense one, and the smaller number of free parameters is also statistically advantageous. In an image with a two-dimensional topology, the same input pattern can appear at different positions, and nearby values tend to be strongly correlated, which is important for the data model. The same local feature can be computed at any translated position in the image, so the image is scanned with the local feature operator; this scan is the convolution, and it transforms the input map into a feature map. The scan can be seen as extracting the same feature at different locations with shared weights, which is closer to a biological neural network. This design not only reduces the complexity of the model but also greatly reduces the number of network weights. By sharing weights, the convolutional neural network reduces the number of parameters to be learned and, compared with an ordinary feed-forward network trained with the error back-propagation (BP) algorithm, greatly improves training speed and accuracy. As a deep learning algorithm, the convolutional neural network also minimizes the overhead of data preprocessing.
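To make the effect of weight sharing concrete, the following minimal sketch counts the free parameters of a dense connection against those of a shared convolution kernel; the layer sizes are hypothetical choices for illustration, not values from the patent:

```python
# A minimal illustration of weight sharing, assuming a hypothetical 32x32
# input mapped to a 28x28 feature map (sizes chosen for illustration only).

# Dense connection: every input pixel is connected to every output unit.
dense_params = (32 * 32) * (28 * 28)   # 802,816 free parameters

# Convolution: one 5x5 kernel is shared across all output positions.
conv_params = 5 * 5 + 1                # 26 free parameters (kernel + bias)

print(dense_params, conv_params)       # 802816 26
```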
The convolutional neural network comprises convolutional layers, pooling layers and fully connected layers. Convolutional neural networks require a large number of labeled samples for training, and sample augmentation is also required during training. Moreover, because of the convolution structure and the huge data volume, training a convolutional neural network is computationally intensive, so most deep convolutional networks are trained on GPUs.
Convolutional neural networks generally use convolution and pooling as the basic operations and do not require an unsupervised layer-by-layer pre-training strategy. Back propagation works very well throughout the training process, and a suitable activation function can further improve both the training speed and the final accuracy.
A convolutional neural network is constructed, the convolution kernel weights in the network are randomly initialized, and training of the model begins; after the accuracy reaches an expected threshold, the network structure and the corresponding weight parameters are saved.
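A minimal sketch of this construct-train-save step is given below. It assumes PyTorch and stand-in random data; the layer sizes, the accuracy threshold of 0.95 and the file name model.pt are illustrative assumptions rather than values from the patent:

```python
import torch
import torch.nn as nn

# Stand-in data: 256 single-channel 28x28 "sample images" with 10 classes.
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))

# A small convolutional network with randomly initialized kernel weights.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 12 * 12, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    logits = model(images)
    loss_fn(logits, labels).backward()
    optimizer.step()
    accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
    if accuracy >= 0.95:                             # expected accuracy threshold
        torch.save(model.state_dict(), "model.pt")   # keep structure + weights
        break
```

In practice the accuracy would be measured on held-out data rather than on the training batch itself.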
A program is then written to test the neural network, and the program is run on a computer with strong computing power. If the network does not perform as well as expected, the previous steps are repeated, for example by increasing the amount of data in the data set, modifying the network structure, modifying the hyper-parameters of the neural network, increasing the depth of the network structure, or changing the algorithm model.
The neural network comprises a plurality of network nodes, and the network structure is the relationship among different network nodes.
Thus, the neural network includes a plurality of network structures and their corresponding weight parameters.
Optionally, as shown in FIG. 2, step S120 may include steps S121 and S122.
Step S121, comparing the weight parameter corresponding to each network structure with a preset weight parameter threshold to obtain a small-weight network structure.
Step S122, deleting the small-weight network structure from the neural network to obtain a neural network model.
When the network structure is cut, the network structures whose weight parameters are smaller than the preset weight parameter threshold are deleted from the neural network, yielding a simplified neural network model.
As shown in FIG. 3, FIG. 3(a) is the network structure before cutting; it comprises 2 input nodes, 4 hidden nodes and one output node. After the weight parameter threshold is set, the network structures (i.e., the connection relationships between network nodes) whose weight parameters are smaller than the threshold can be deleted from the neural network; because such weight parameters are very small, deleting them has little effect on the stability and accuracy of the whole network. FIG. 3(b) is the neural network model obtained after cutting: a solid line indicates that the connection between two network nodes is kept and its weight parameter still needs to be calculated, while a dotted line indicates that the connection between two network nodes and its corresponding weight parameter have been discarded.
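The cutting of steps S121 and S122 can be sketched on a weight matrix as follows; the NumPy representation and the threshold value 0.1 are illustrative assumptions:

```python
import numpy as np

def cut_small_weights(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Delete (zero out) connections whose weight magnitude is below the
    preset weight parameter threshold, as in steps S121 and S122."""
    mask = np.abs(weights) >= threshold   # compare each weight with the threshold
    return weights * mask                 # small-weight connections are discarded

# Example: weights between 2 input nodes and 4 hidden nodes (cf. FIG. 3).
w = np.array([[0.62, -0.03, 0.98, 0.05],
              [0.01, 0.57, -0.71, 0.04]])
print(cut_small_weights(w, threshold=0.1))
```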
Optionally, as shown in FIG. 4, after step S120, the method for constructing a neural network model may further include steps S210 to S230.
Step S210, constructing a check set using the sample images.
Step S220, inputting the check set into the neural network model to obtain the corresponding accuracy.
Step S230, adjusting the weight parameters of the neural network model according to the accuracy.
The weight parameters are initially either randomly initialized or derived from a neural network model previously trained on other data. After training begins, a check set is constructed from the sample images; each time training produces a neural network model, the current weight parameters are verified against the check set and the result is fed back, until weight parameters with better accuracy are obtained. An example follows:
Given a set of data (x, y):
Training set: (x1, y1), (x2, y2), (x3, y3);
and (4) checking the set: (x11, y 11).
The relationship between y and x must be found. The network may be initialized to y = 10x + 8 or some other, more complex relationship; when the error measured on the check set falls within the allowable range, the corresponding weight parameters are kept (that is, when x11 is input, a value close to y11 is obtained, for example with an error of less than 0.0001; the error threshold can be adjusted to suit the specific scenario).
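The following sketch reproduces this toy example with NumPy. The concrete numbers and the 0.0001 tolerance follow the text above, while the least-squares fit standing in for training is an illustrative assumption:

```python
import numpy as np

# Training pairs generated from the relationship to be discovered, y = 10x + 8.
x_train = np.array([1.0, 2.0, 3.0])
y_train = 10 * x_train + 8

# Fit y = w*x + b on the training set (stand-in for deep learning training).
w, b = np.polyfit(x_train, y_train, deg=1)

# Check set (x11, y11): accept the weights only if the error is small enough.
x11, y11 = 4.0, 48.0
error = abs((w * x11 + b) - y11)
if error < 0.0001:
    print("weight parameters accepted:", w, b)
else:
    print("error too large, keep adjusting the weight parameters")
```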
Optionally, after the network structure of the neural network is cut according to the weight parameters to obtain the neural network model, the precision of each weight parameter in the neural network model can be reduced to improve the efficiency of subsequent deep learning. The main ways of reducing the precision of the weight parameters are converting floating-point numbers to lower precision, truncating floating-point digits, and rounding. For example, the weight parameter 0.6124002 is truncated directly to 0.6; 0.9860200223 keeps two digits and becomes 0.98; and 0.5683 is rounded to 0.6. By reducing the precision of each weight parameter in the neural network model, the calculation rate of deep learning with the model can be greatly improved and the time cost greatly reduced, which facilitates applying the neural network model to the mobile terminal.
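A minimal sketch of these precision reductions, reproducing the three examples above (the helper function names are assumptions of ours):

```python
import math

def truncate_weight(weight: float, digits: int) -> float:
    """Keep only the first `digits` decimal digits of a weight parameter."""
    factor = 10 ** digits
    return math.trunc(weight * factor) / factor

def round_weight(weight: float, digits: int) -> float:
    """Round a weight parameter to `digits` decimal digits."""
    return round(weight, digits)

print(truncate_weight(0.6124002, 1))     # 0.6
print(truncate_weight(0.9860200223, 2))  # 0.98
print(round_weight(0.5683, 1))           # 0.6
```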
It should be noted that cutting the network structure of the neural network according to the weight parameters trades accuracy against efficiency. When the check set shows that the accuracy of the neural network model is not high enough, the preset weight parameter threshold can be adjusted appropriately so that fewer network structures are deleted, which improves the calculation accuracy of the neural network model.
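This accuracy-efficiency feedback loop can be sketched as follows, reusing cut_small_weights from the earlier sketch; the candidate thresholds and the check function are illustrative assumptions:

```python
def calibrate_threshold(weights, passes_check, candidates=(0.2, 0.1, 0.05, 0.0)):
    """Try progressively smaller weight parameter thresholds until the cut
    model reaches acceptable accuracy on the check set."""
    for threshold in candidates:          # from aggressive to conservative
        cut = cut_small_weights(weights, threshold)
        if passes_check(cut):             # e.g. check-set accuracy is high enough
            return threshold, cut
    return 0.0, weights                   # fall back: delete nothing
```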
With this method, when a neural network model is constructed, a neural network is first constructed to perform deep learning training on the collected sample images, and the network structure of the neural network is then cut according to its weight parameters. The finally obtained neural network model keeps a high accuracy rate while greatly reducing the calculation time of deep learning, improving the calculation rate when deep learning is performed with the model and making it possible to apply the neural network model to the mobile terminal.
The following are embodiments of the disclosed apparatus, which may be used to implement embodiments of the method for constructing the neural network model described above. For details not disclosed in the embodiments of the disclosed apparatus, please refer to the embodiments of the method for constructing the neural network model of the present disclosure.
FIG. 5 is a block diagram illustrating an apparatus for constructing a neural network model according to an exemplary embodiment, including but not limited to: a training module 110 and a cutting module 120.
The training module 110 is configured to construct a neural network for deep learning training of the acquired sample image, where the neural network includes a plurality of network structures and weight parameters corresponding to the network structures;
and the cutting module 120 is configured to cut the network structure of the neural network according to the weight parameter, so as to obtain a neural network model.
The implementation of the functions and actions of each module in the apparatus is described in detail in the implementation of the corresponding steps of the method for constructing a neural network model, and is not repeated here.
Optionally, as shown in FIG. 6, in the apparatus for constructing a neural network model shown in the embodiment corresponding to FIG. 5, the cutting module 120 includes but is not limited to: a comparison unit 121 and a cutting unit 122.
A comparison unit 121, configured to compare the weight parameter corresponding to each network structure with a preset weight parameter threshold, so as to obtain a small-weight network structure;
and the cutting unit 122 is configured to delete the small-weight network structure from the neural network to obtain the neural network model.
Optionally, the apparatus for constructing a neural network model shown in the embodiment corresponding to FIG. 5 further includes but is not limited to: a precision reduction module.
And the precision reduction module is used for reducing the precision of each weight parameter in the neural network model.
Optionally, as shown in FIG. 7, the apparatus for constructing a neural network model shown in the embodiment corresponding to FIG. 5 further includes, but is not limited to: a check set constructing module 210, an accuracy obtaining module 220 and a weight parameter adjusting module 230.
A check set constructing module 210, configured to construct a check set by using the sample image;
an accuracy obtaining module 220, configured to input the check set into the neural network model, and obtain a corresponding accuracy;
and a weight parameter adjusting module 230, configured to adjust the weight parameter of the neural network model according to the accuracy.
FIG. 8 is a block diagram illustrating a mobile terminal 100 according to an exemplary embodiment. Referring to FIG. 8, the mobile terminal 100 may include one or more of the following components: a processing component 101, a memory 102, a power component 103, a multimedia component 104, an image acquisition component 105, a sensor component 107 and a communication component 108. Not all of these components are necessary, and the mobile terminal 100 may add other components or omit some of them according to its own functional requirements, which is not limited in this embodiment.
The processing component 101 generally controls overall operations of the mobile terminal 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 101 may include one or more processors 109 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 101 may include one or more modules that facilitate interaction between the processing component 101 and other components. For example, the processing component 101 may include a multimedia module to facilitate interaction between the multimedia component 104 and the processing component 101.
The memory 102 is configured to store various types of data to support operation at the mobile terminal 100. Examples of such data include instructions for any application or method operating on the mobile terminal 100. The Memory 102 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as an SRAM (Static random access Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a ROM (Read-Only Memory), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk. Also stored in memory 102 are one or more modules configured to be executed by the one or more processors 109 to perform all or a portion of the steps of any of the illustrated methods described above.
The power supply component 103 provides power to the various components of the mobile terminal 100. The power components 103 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the mobile terminal 100.
The multimedia component 104 includes a screen that provides an output interface between the mobile terminal 100 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (touch panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The image acquisition component 105 is configured to capture images or video. For example, the image acquisition component 105 may include a camera configured to capture external images when the mobile terminal 100 is in an operating mode. The acquired images may further be stored in the memory 102 or transmitted via the communication component 108. In some embodiments, the image acquisition component 105 further comprises a scanner or the like.
The sensor component 107 includes one or more sensors for providing various aspects of state assessment for the mobile terminal 100. For example, the sensor assembly 107 may detect an open/close state of the mobile terminal 100, a relative positioning of the components, a change in coordinates of the mobile terminal 100 or a component of the mobile terminal 100, and a change in temperature of the mobile terminal 100. In some embodiments, the sensor assembly 107 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 108 is configured to facilitate communications between the mobile terminal 100 and other devices in a wired or wireless manner. The mobile terminal 100 may access a Wireless network based on a communication standard, such as WiFi (Wireless-Fidelity), 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 108 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the Communication component 108 further includes a Near Field Communication (NFC) module to facilitate short-range Communication. For example, the NFC module may be implemented based on an RFID (Radio Frequency Identification) technology, an IrDA (Infrared data association) technology, an UWB (Ultra-Wideband) technology, a BT (Bluetooth) technology, and other technologies.
In an exemplary embodiment, the mobile terminal 100 may be implemented by one or more ASICs (Application specific integrated circuits), DSPs (Digital Signal processors), PLDs (Programmable Logic devices), FPGAs (Field-Programmable gate arrays), controllers, microcontrollers, microprocessors or other electronic components for performing the above-described methods.
The specific manner in which the processor of the mobile terminal in this embodiment performs operations has been described in detail in the embodiments of the method for constructing a neural network model, and will not be elaborated here.
Optionally, the present invention further provides a mobile terminal that executes all or part of the steps of any of the methods for constructing a neural network model described above. The mobile terminal includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the above exemplary embodiments.
In an exemplary embodiment, a storage medium is also provided. The storage medium is a computer-readable storage medium, for example a non-transitory computer-readable storage medium including instructions, such as the memory 102 comprising instructions executable by the processor 109 of the mobile terminal 100 to perform the method for constructing a neural network model described above.
It is to be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be effected therein by one skilled in the art without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (10)
1. A method for constructing a neural network model, the method comprising:
constructing a neural network to carry out deep learning training on the collected sample image, wherein the neural network comprises a plurality of network structures and weight parameters corresponding to the network structures;
and cutting the network structure of the neural network according to the weight parameters to obtain a neural network model.
2. The method of claim 1, wherein the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model comprises:
comparing the weight parameter corresponding to each network structure with a preset weight parameter threshold value to obtain a small weight network structure;
and deleting the small-weight network structure from the neural network to obtain a neural network model.
3. The method of claim 1, wherein after the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model, the method further comprises:
and reducing the precision of each weight parameter in the neural network model.
4. The method of claim 1, wherein after the step of cutting the network structure of the neural network according to the weight parameters to obtain a neural network model, the method further comprises:
constructing a check set by using the sample image;
inputting the check set into the neural network model to obtain corresponding accuracy;
and adjusting the weight parameters of the neural network model according to the accuracy.
5. An apparatus for constructing a neural network model, the apparatus comprising:
the training module is used for constructing a neural network to carry out deep learning training on the acquired sample image, and the neural network comprises a plurality of network structures and weight parameters corresponding to the network structures;
and the cutting module is used for cutting the network structure of the neural network according to the weight parameters to obtain a neural network model.
6. The apparatus of claim 5, wherein the cutting module comprises:
the comparison unit is used for comparing the weight parameters corresponding to the network structures with preset weight parameter thresholds to obtain small-weight network structures;
and the cutting unit is used for deleting the small-weight network structure from the neural network to obtain a neural network model.
7. The apparatus of claim 5, further comprising:
and the precision reduction module is used for reducing the precision of each weight parameter in the neural network model.
8. The apparatus of claim 5, further comprising:
the check set construction module is used for constructing a check set by adopting the sample image;
the accuracy rate obtaining module is used for inputting the check set into the neural network model to obtain corresponding accuracy rate;
and the weight parameter adjusting module is used for adjusting the weight parameters of the neural network model according to the accuracy.
9. A mobile terminal, characterized in that the mobile terminal comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A computer-readable storage medium storing a program, characterized in that the program, when executed, causes a mobile terminal to perform the method according to any of claims 1-4.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910010775.XA | 2019-01-07 | 2019-01-07 | Method and device for constructing a neural network model, and mobile terminal |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910010775.XA | 2019-01-07 | 2019-01-07 | Method and device for constructing a neural network model, and mobile terminal |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN109767001A | 2019-05-17 |
Family
ID=66452664

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910010775.XA | Method and device for constructing a neural network model, and mobile terminal | 2019-01-07 | 2019-01-07 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN109767001A |
Cited By (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111967570A | 2019-07-01 | 2020-11-20 | 嘉兴砥脊科技有限公司 | Implementation method, device and machine equipment of mysterious neural network system |
| CN111967570B | 2019-07-01 | 2024-04-05 | 北京砥脊科技有限公司 | Implementation method, device and machine equipment of visual neural network system |
| CN110782031A | 2019-09-27 | 2020-02-11 | 北京计算机技术及应用研究所 | Multi-frame convolutional neural network model structure visualization and network reconstruction method |
| CN113139650A | 2020-01-20 | 2021-07-20 | 阿里巴巴集团控股有限公司 | Tuning method and computing device of deep learning model |
| CN113408632A | 2021-06-28 | 2021-09-17 | 北京百度网讯科技有限公司 | Method and device for improving image classification accuracy, electronic equipment and storage medium |
Legal Events

| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190517 |