CN105550749A - Method for constructing convolution neural network in novel network topological structure

Method for constructing convolution neural network in novel network topological structure

Info

Publication number
CN105550749A
CN105550749A (Application No. CN201510908355.5A)
Authority
CN
China
Prior art keywords
algorithm
convolutional neural network
neural network
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510908355.5A
Other languages
Chinese (zh)
Inventor
游萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd filed Critical Sichuan Changhong Electric Co Ltd
Priority to CN201510908355.5A priority Critical patent/CN105550749A/en
Publication of CN105550749A publication Critical patent/CN105550749A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology


Abstract

The invention relates to the construction of a convolutional neural network, and aims to solve the problem that conventional neural network algorithms place high demands on hardware processing speed and therefore cannot be applied in household electrical appliances. The method comprises the following steps: determining the interconnection structure among neuron nodes, and determining the relationship between numerical-feedback calculation and forward-calculation transfer among those nodes; building a multilayer perceptron model using a back-propagation neural network, the structure of which includes a forward-propagation algorithm; and running the back-propagation algorithm and the forward-propagation algorithm simultaneously to carry out the training and calculation of the convolutional neural network. The method is suitable for pattern recognition in household electrical appliances.

Description

Method for constructing a convolutional neural network with a novel network topology
Technical field
The present invention relates to the construction of convolutional neural networks, and in particular to a method for constructing a convolutional neural network with a novel network topology.
Background technology
Convolutional neural networks are an important research area in computer vision and pattern recognition: information-processing systems in which a computer, inspired by the thinking of the biological brain, processes specific objects in a human-like way. They are widely applied, and fast, accurate object detection and recognition is an important component of modern information-processing technology. Because the amount of information has grown sharply in recent years, there is an urgent need for detection and recognition technology that lets people find the information they require within large volumes of data. Image retrieval and text recognition both belong to this category, and the detection and recognition of text is a precondition for information retrieval. Detection and recognition technology is an important component of computer vision and human-computer interaction.
The convolutional neural network is an algorithm model that has recently been widely used in fields such as pattern recognition and computer vision. It has few training parameters, requires little manual intervention, and adapts well, but its slow training, high computational complexity, and complex software structure and maintenance create difficulties in several respects. Moreover, the basic engineering of convolutional network structures admits many different designs, training sample sets vary greatly, and the nonlinear information-processing capability of neural networks is itself complex, so technical difficulties remain. Embedded platforms such as household-appliance systems are constrained in storage resources and cannot draw on the powerful CPU resources of a PC or server. It is therefore necessary to make the improvements needed in the algorithms by which convolutional neural networks process data quickly, so as to increase the adaptability of the algorithm to more demanding applications.
Summary of the invention
The object of the present invention is to solve the problem that existing neural network algorithms place high demands on hardware processing speed during training and calculation, so that sample training and complex applications cannot be carried out in household-appliance systems.
The technical solution by which the present invention solves this problem is a method for constructing a convolutional neural network with a novel network topology, characterized in that it comprises the following steps:
Determine the interconnection structure between neuron nodes, and from that structure determine the relationship between numerical-feedback calculation and forward-calculation transfer between the nodes;
Use a back-propagation neural network to create a multilayer perceptron model, the structure of which includes a forward-propagation algorithm;
Run the back-propagation algorithm and the forward-propagation algorithm simultaneously to carry out the training and calculation of the convolutional neural network.
In particular, the iterative computation of the back-propagation algorithm repeats the following process: starting from the last layer, compute the weight changes for the current layer, then compute the output error at the previous layer, and repeat.
In particular, the partial derivative of the back-propagated error produced while the back-propagation algorithm runs equals the actual output value of the later-layer network minus its target output value.
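This rule matches the standard derivation for a squared-error loss; as a sketch in our own notation (not taken from the patent), with actual output y_k and target t_k:

```latex
E = \tfrac{1}{2}\sum_k (y_k - t_k)^2
\quad\Longrightarrow\quad
\frac{\partial E}{\partial y_k} = y_k - t_k .
```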
Preferably, when the back-propagation and forward-propagation algorithms run simultaneously, a multi-core parallel processing structure is used for multi-threaded parallel processing.
In particular, during multi-threaded parallel processing, a context switch returns to the first thread during the network's back-propagation phase, and the weight values changed during the switch are computed synchronously across the threads.
The beneficial effect of the invention is that the method reduces the training time of a convolutional neural network as far as possible without occupying large computational resources, so that subsequent experiments and simulations can train on much larger sample sets while consuming less computing time. With adaptability greatly improved, the range of objects that pattern recognition and computer vision can detect and identify also broadens. This basic engineering technique, a convolutional-neural-network structure with an improved network topology, promotes application patterns for intelligent appliance products, improves the classification and detection performance of household appliances in visual interaction, and, through greater intelligence and generality, yields a better user experience in actual products.
Embodiment
The technical solution of the present invention is described in detail below.
The present invention solves the problem that existing neural network algorithms demand high hardware processing speed during calculation and cannot be applied in household appliances, by providing a method for constructing a convolutional neural network with a novel network topology. The method comprises the following steps:
Determine the interconnection structure between neuron nodes, and from that structure determine the relationship between numerical-feedback calculation and forward-calculation transfer between the nodes;
Use a back-propagation neural network to create a multilayer perceptron model, the structure of which includes a forward-propagation algorithm;
Run the back-propagation algorithm and the forward-propagation algorithm simultaneously to carry out the training and calculation of the convolutional neural network.
The iterative computation of the back-propagation algorithm repeats the following process: starting from the last layer, compute the weight changes for the current layer, then compute the output error at the previous layer, and repeat. The partial derivative of the back-propagated error produced while the algorithm runs equals the actual output value of the later-layer network minus its target output value. When the back-propagation and forward-propagation algorithms run simultaneously, a multi-core parallel structure is used for multi-threaded parallel processing. During that processing, a context switch returns to the first thread during the network's back-propagation phase, and the weight values changed during the switch are computed synchronously across the threads.
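As a concrete illustration, the steps and the output-error rule above can be sketched as a minimal single-threaded multilayer perceptron in NumPy. The layer sizes, sigmoid activation, learning rate, and XOR training task are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: interconnection structure between neuron nodes,
# here fully connected layers of sizes 2 -> 8 -> 1 (an assumption).
sizes = [2, 8, 1]
W = [rng.normal(0.0, 0.5, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]
B = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 2: multilayer perceptron whose structure includes the
# forward-propagation pass; all layer activations are kept.
def forward(x):
    acts = [x]
    for w, b in zip(W, B):
        acts.append(sigmoid(acts[-1] @ w + b))
    return acts

# Step 3: back-propagation using the stored activations; the
# output-layer error term starts from (actual output - target output).
def train_step(x, t, lr=1.0):
    acts = forward(x)
    delta = (acts[-1] - t) * acts[-1] * (1.0 - acts[-1])
    for i in reversed(range(len(W))):
        grad_w = acts[i].T @ delta           # weight change for this layer
        grad_b = delta.sum(axis=0)
        if i > 0:                            # error at the previous layer
            delta = (delta @ W[i].T) * acts[i] * (1.0 - acts[i])
        W[i] -= lr * grad_w
        B[i] -= lr * grad_b

# Train on XOR (illustrative data set).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def mse():
    return float(np.mean((forward(X)[-1] - T) ** 2))

before = mse()
for _ in range(10000):
    train_step(X, T)
after = mse()
```

The gradients for a layer are computed before that layer's weights are updated, so the error propagated to the previous layer uses the pre-update weights, as the iterative description requires.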
The invention provides a convolutional-neural-network structure with a novel network topology; the design of the structure and the algorithm model comprises the following steps.
First, the neural network follows the established design method: a multilayer perceptron model is created to solve problems that are not linearly separable. Because of the limitations of the perceptron learning algorithm, however, its pattern-classification ability is very limited, so the back-propagation (BackPropagation) learning algorithm, the most influential network learning method, must be used, and the construction is based on it. What the present invention protects is an improvement to the computation pattern of this algorithm: a novel network-construction method that improves the interconnection structure between layers and nodes and strengthens the learning ability of the hidden-layer neurons. Experiments show that improving the layer and node interconnection alone remains limited, so the existing back-propagation structure is further improved by introducing forward propagation (ForwardPropagation) into the network design. The forward-propagation algorithm cited in this patent exploits the parallelism of the neuron computations within each layer: each layer uses one kernel function to compute that layer's neuron values in parallel, and each kernel function is optimized according to the characteristics of the network and of the overall architecture.
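The one-kernel-per-layer scheme described above can be sketched as follows. A hardware kernel is emulated here with a thread pool, and the network weights are made-up values; this is an illustration of per-layer neuron parallelism, not the patent's implementation:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def neuron(inputs, weights, bias):
    # A single neuron: weighted sum of inputs followed by a sigmoid.
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def layer_kernel(inputs, layer_weights, layer_biases, pool):
    # One "kernel" per layer: neurons within a layer are independent
    # of one another, so all of them are evaluated in parallel.
    futures = [pool.submit(neuron, inputs, w, b)
               for w, b in zip(layer_weights, layer_biases)]
    return [f.result() for f in futures]

def forward(x, weights, biases):
    # Layers must run in sequence; only the neurons inside each
    # layer are parallelized.
    with ThreadPoolExecutor() as pool:
        for lw, lb in zip(weights, biases):
            x = layer_kernel(x, lw, lb, pool)
    return x

# A tiny 2-3-1 network with arbitrary illustrative parameters.
W = [[[0.5, -0.5], [1.0, 1.0], [-1.0, 0.3]],   # 3 hidden neurons, 2 inputs each
     [[1.0, -1.0, 0.5]]]                        # 1 output neuron, 3 inputs
B = [[0.0, 0.1, -0.1], [0.2]]
y = forward([1.0, 0.0], W, B)
```

In a real system each `layer_kernel` would be a GPU kernel tuned to the layer's shape, as the text suggests, rather than a Python thread pool.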
Then, building on the model above, the network topology is changed and the neural network design improved to obtain better classification performance and structural characteristics. Back-propagation and forward propagation are run simultaneously (simultaneous) to carry out the training and calculation of the convolutional neural network; in our experiments, multiple threads are used to speed up the computation of each epoch and thereby reduce the total training time.
Back-propagation in a neural network is an iterative process that moves backwards from the last layer through the intermediate layers to the first. If we knew the output error at each layer, computing the weight changes that reduce the error would not be difficult; the problem is that we can observe the output only at the last layer, and this is the key difficulty to be solved. Back-propagation provides a way to determine the output error at the previous layer from the output of the current layer, so the process iterates: compute the weight changes for the current layer starting from the last one, then compute the output error at the previous layer, and repeat. All neural network models are essentially the same at the algorithmic level and realize the same basic goal: adjust the weights so that the network converges to the optimal weights, and accelerate that back-propagation process. On a typical benchmark such as the MNIST database, one epoch requires up to 60,000 back-propagation passes, and even at current high CPU speeds a single epoch can take nearly 40 minutes. If a convolutional network had to be trained on a huge sample set, thousands or even tens of thousands of iterations would consume an immeasurable amount of computing time. It is precisely this cost that compels us to find a more robust, fast, and reliable processing scheme for practical work.
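In the standard formulation (our notation, not the patent's), the layer-by-layer iteration just described reads, for output layer L, error signals \delta, weights W, pre-activations z, activations a, and learning rate \eta:

```latex
\delta^{(L)} = (y - t)\odot\sigma'\!\bigl(z^{(L)}\bigr),\qquad
\delta^{(l)} = \bigl(W^{(l+1)\top}\delta^{(l+1)}\bigr)\odot\sigma'\!\bigl(z^{(l)}\bigr),\qquad
\Delta W^{(l)} = -\eta\,\delta^{(l)}\,a^{(l-1)\top}.
```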
Second, a convolutional neural network is relatively simple to implement at the node level, in both the forward-propagation and the back-propagation training phases. The present invention chooses a simple, innovative design: add forward propagation alongside back-propagation training. Forward propagation is the computation of each neuron's output value, obtained from the inputs provided to it. Because the network can adjust its synaptic connection weights to accommodate change, a network already trained on a specific sample set can easily be retrained to adapt to subtle variations, giving it good adaptability. Running the backward and forward propagation processes at the same time further accelerates this retraining, and this topological design greatly improves performance. The question of computation time must be cut at its key point: in our design, the back-propagation algorithm depends on the numerical outputs computed by individual neurons in the forward-propagation step. For example, an output value enters the input pattern of the first thread; producing that value causes the downstream neurons of each output layer to recompute; and while that calculation proceeds, the back-propagation phase starts as well. A formula computes the back-propagated error arising during the network computation: the partial derivative of the back-propagation error equals the actual output of the later-layer network minus its target output. We use this result to update the weight changes, and the error signal of a hidden layer is determined by recursive calculation from the error signals of all neurons directly connected to that hidden layer's neurons. The innovative design proposed in this patent adds the forward-propagation computation alongside back-propagation training with a single aim: to accelerate convergence to the optimal weights.
Finally, to accelerate convergence, a mechanism of simultaneously running processes is used. The learning processes of back-propagation and forward propagation perform a large amount of related weight-update computation in a serial fashion, with each pattern's computation followed by that pattern's update; with today's high-throughput CPUs and GPUs the computation itself is fast, especially when the training set is very large and the samples highly redundant. To search the weight space efficiently with the computing power available, back-propagation and forward-propagation learning are combined in the convolutional network designed for pattern classification. In a simple multi-threaded setup, the first thread forward-propagates an input pattern, producing the outputs of every layer including the output-layer neurons. The back-propagation phase then begins, and the equation used to compute the pattern's weight error depends on the output-layer outputs. Meanwhile, because of multi-threading, another thread performs fast-converging forward propagation; during the network's back-propagation phase a context switch returns to the first thread, which has by now attempted the propagation computation for the first pattern. The evaluation depends on the neurons' numerical outputs, and those values are being changed by the second thread's forward propagation. The solution to this multi-threading problem is to remember all neuron outputs: the outputs of possibly thousands of neurons under one forward-propagated input pattern are stored in distinct memory locations, and these stored values are used during the back-propagation phase. Because the outputs are stored, back-propagation is not affected by another thread's forward propagation on a different pattern. Convergence depends on the weights used in the calculation, and a weight's value may be changed by another thread's back-propagation. Continuing the description: the first thread starts back-propagation, and the changed weight values are synchronized across the two threads. At this point a context switch occurs to the second thread, which tries to compute new weights while also forward-propagating the weight values changed according to the equation. Because the network accepts the weight results that converge most readily, if the weight change from the first thread is accepted, the second thread's computation becomes invalid; only the optimal values among the threads are selected for the update.
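One way to read the thread-selection scheme above is the following sketch: each thread trains its own candidate weights from stored, thread-local values, and only the best-converging result is accepted under a lock. The one-weight model, data, and hyperparameters are invented for illustration and are not the patent's:

```python
import threading
import random

random.seed(0)
lock = threading.Lock()
best = {"loss": float("inf"), "w": None}

def loss(w, data):
    # Squared error of a one-weight "network" y = w * x.
    return sum((w * x - t) ** 2 for x, t in data)

def worker(w0, data, steps, lr):
    w = w0
    for _ in range(steps):
        # Forward and backward values are thread-local, so another
        # thread's forward pass cannot disturb this back-propagation.
        grad = sum(2.0 * (w * x - t) * x for x, t in data)
        w -= lr * grad
    cand = loss(w, data)
    with lock:
        # Only the best-converged weights are accepted, echoing the
        # patent's "select the optimal value among the threads".
        if cand < best["loss"]:
            best["loss"], best["w"] = cand, w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # targets follow t = 2x
threads = [threading.Thread(target=worker,
                            args=(random.uniform(-1.0, 1.0), data, 200, 0.01))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four workers converge toward the optimal weight w = 2, and the lock-protected comparison keeps whichever candidate reaches the lowest loss.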
With a network well trained on a specific sample set, and with improved computer hardware, the inherent parallel structure of multi-core processors and parallel processing also greatly raise the computing speed of the convolutional neural network. This computing power, combined with the optimized network-topology design that runs the back-propagation and forward-propagation algorithms simultaneously, gives the training and calculation process excellent results.

Claims (5)

1. A method for constructing a convolutional neural network with a novel network topology, characterized in that it comprises the steps of:
determining the interconnection structure between neuron nodes, and from that structure determining the relationship between numerical-feedback calculation and forward-calculation transfer between the nodes;
using a back-propagation neural network to create a multilayer perceptron model, the structure of which includes a forward-propagation algorithm; and
running the back-propagation algorithm and the forward-propagation algorithm simultaneously to carry out the training and calculation of the convolutional neural network.
2. The method of claim 1, characterized in that the iterative computation of the back-propagation algorithm repeats the following process: starting from the last layer, compute the weight changes for the current layer, then compute the output error at the previous layer, and repeat.
3. The method of claim 1 or 2, characterized in that the partial derivative of the back-propagated error produced while the back-propagation algorithm runs equals the actual output value of the later-layer network minus its target output value.
4. The method of claim 3, characterized in that, when the back-propagation and forward-propagation algorithms run simultaneously, a multi-core parallel processing structure is used for multi-threaded parallel processing.
5. The method of claim 4, characterized in that, during multi-threaded parallel processing, a context switch returns to the first thread during the network's back-propagation phase, and the weight values changed during the switch are computed synchronously across the threads.
CN201510908355.5A 2015-12-09 2015-12-09 Method for constructing convolution neural network in novel network topological structure Pending CN105550749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510908355.5A CN105550749A (en) 2015-12-09 2015-12-09 Method for constructing convolution neural network in novel network topological structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510908355.5A CN105550749A (en) 2015-12-09 2015-12-09 Method for constructing convolution neural network in novel network topological structure

Publications (1)

Publication Number Publication Date
CN105550749A 2016-05-04

Family

ID=55829930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510908355.5A Pending CN105550749A (en) 2015-12-09 2015-12-09 Method for constructing convolution neural network in novel network topological structure

Country Status (1)

Country Link
CN (1) CN105550749A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2833295A2 (en) * 2013-07-31 2015-02-04 Fujitsu Limited Convolutional-neural-network-based classifier and classifying method and training methods for the same
CN104217214A (en) * 2014-08-21 2014-12-17 广东顺德中山大学卡内基梅隆大学国际联合研究院 Configurable convolutional neural network based red green blue-distance (RGB-D) figure behavior identification method
CN104463324A (en) * 2014-11-21 2015-03-25 长沙马沙电子科技有限公司 Convolution neural network parallel processing method based on large-scale high-performance cluster
CN104731709A (en) * 2015-03-31 2015-06-24 北京理工大学 Software defect predicting method based on JCUDASA_BP algorithm

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327251A (en) * 2016-08-22 2017-01-11 北京小米移动软件有限公司 Model training system and model training method
CN109564637B (en) * 2016-09-30 2023-05-12 国际商业机器公司 Methods, systems, and media for scalable streaming synaptic supercomputers
CN109564637A (en) * 2016-09-30 2019-04-02 国际商业机器公司 Expansible stream cynapse supercomputer for extreme handling capacity neural network
CN108205701A (en) * 2016-12-20 2018-06-26 联发科技股份有限公司 A kind of system and method for performing convolutional calculation
CN108205701B (en) * 2016-12-20 2021-12-28 联发科技股份有限公司 System and method for executing convolution calculation
CN111126590A (en) * 2016-12-23 2020-05-08 中科寒武纪科技股份有限公司 Artificial neural network operation device and method
CN111126590B (en) * 2016-12-23 2023-09-29 中科寒武纪科技股份有限公司 Device and method for artificial neural network operation
CN110688159A (en) * 2017-07-20 2020-01-14 上海寒武纪信息科技有限公司 Neural network task processing system
CN108446758B (en) * 2018-02-11 2021-11-30 江苏金羿智芯科技有限公司 Artificial intelligence calculation-oriented neural network data serial flow processing method
CN108491924A (en) * 2018-02-11 2018-09-04 江苏金羿智芯科技有限公司 A kind of serial stream treatment device of Neural Network Data calculated towards artificial intelligence
CN108491924B (en) * 2018-02-11 2022-01-07 江苏金羿智芯科技有限公司 Neural network data serial flow processing device for artificial intelligence calculation
CN108446758A (en) * 2018-02-11 2018-08-24 江苏金羿智芯科技有限公司 A kind of serial flow processing method of Neural Network Data calculated towards artificial intelligence
CN110147872A (en) * 2018-05-18 2019-08-20 北京中科寒武纪科技有限公司 Code storage device and method, processor and training method
US11995556B2 (en) 2018-05-18 2024-05-28 Cambricon Technologies Corporation Limited Video retrieval method, and method and apparatus for generating video retrieval mapping relationship
CN109886407A (en) * 2019-02-27 2019-06-14 上海商汤智能科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN109886407B (en) * 2019-02-27 2021-10-22 上海商汤智能科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN111552563A (en) * 2020-04-20 2020-08-18 南昌嘉研科技有限公司 Multithreading data architecture, multithreading message transmission method and system
CN111539521A (en) * 2020-05-25 2020-08-14 上海华力集成电路制造有限公司 Method for predicting yield of semiconductor product by neural network error-back propagation algorithm
CN113626652A (en) * 2021-10-11 2021-11-09 北京一流科技有限公司 Data processing network system, data processing network deployment system and method thereof


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160504