CN110378335B - Information analysis method and model based on neural network - Google Patents

Information analysis method and model based on neural network

Info

Publication number
CN110378335B
CN110378335B (application CN201910522299.XA)
Authority
CN
China
Prior art keywords
neural network
matching
text
information
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910522299.XA
Other languages
Chinese (zh)
Other versions
CN110378335A (en)
Inventor
王越胜
丁靓靓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Pengtai Electric Power Design Consulting Co ltd
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910522299.XA priority Critical patent/CN110378335B/en
Publication of CN110378335A publication Critical patent/CN110378335A/en
Application granted granted Critical
Publication of CN110378335B publication Critical patent/CN110378335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a neural-network-based information analysis method and model comprising the following steps. S01: segmenting the picture and extracting candidate regions; S02: identifying and classifying the extracted candidate regions with a convolutional neural network; S03: matching text with the classified candidate-region information using a recurrent neural network; S04: generating a textual description of the picture. The substantial effects of the invention include a wide application range, comprehensive detection angles and modes, the ability to output intuitive text expressing the picture information, faster acquisition of picture information, and improved detection accuracy.

Description

Information analysis method and model based on neural network
Technical Field
The invention relates to the technical field of computer vision, in particular to an information analysis method and a model based on a neural network.
Background
With the rapid development of the internet, the amount of information people receive has grown exponentially, and the ways and types of receiving it have diversified. The most direct form is picture information, which carries certain risks: pictures of all kinds circulate on the network, harmful content in them can exert a negative influence on viewers, and its spread affects society. It is therefore important to detect the information content of these pictures.
The Chinese patent with grant publication number CN103020651B discloses a method for detecting sensitive information in microblog pictures, comprising: establishing a sensitive word library, a font library and a color library; receiving N microblog pictures to be detected and establishing a sensitive information list; generating, according to the size of the current microblog picture and the sensitive word, font and color libraries, a library of sensitive information pictures corresponding to the microblog picture; traversing the microblog picture for matching; judging whether sensitive information exists according to the maximum matching degree between the image blocks at the traversal positions of the microblog picture and the sensitive information pictures; and storing the matched information in the sensitive information list.
The prior art mainly detects the textual information in a picture; it cannot effectively analyze the information the picture itself conveys, nor detect the image content within the picture, so missed detections are possible.
Disclosure of Invention
To address the problem that the prior art cannot analyze and detect the image information in a picture, the invention provides a neural-network-based information analysis method and model.
The technical scheme of the invention is as follows.
A neural-network-based information analysis method comprises the following steps. S01: segmenting the picture and extracting candidate regions; S02: identifying and classifying the extracted candidate regions with a convolutional neural network; S03: matching text with the classified candidate-region information using a recurrent neural network; S04: generating a textual description of the picture.
Preferably, the specific process of step S01 includes: segmenting the picture with a region proposal network (RPN), marking an information frame around each part of the picture to be extracted and returning a score for it, then comparing the returned score with a preset threshold; the parts whose scores are greater than or equal to the threshold are the target candidate regions that meet the requirement.
Preferably, the specific process of step S02 includes: adding a weight and a weight bias coefficient in front of each node of the convolutional neural network; the candidate-region information fed to the input layer is combined with the corresponding weight and bias coefficient before being passed to the hidden layer, and the hidden-layer result is likewise combined with its weight and bias coefficient before being output. Adding the weights and weight bias coefficients to the convolutional neural network makes it clear which situations are more serious, so that the classified targets plainly distinguish which are more harmful, and the classification becomes more effective.
Preferably, the specific process of step S03 includes: training the text and the candidate-region information with a long short-term memory (LSTM) network of the recurrent neural network and matching the text to the candidate-region information. A traditional recurrent neural network trained with gradient descent is unrolled in time; when the hidden sequence is too long, there is a time lag before one neuron's output acts on the next, so the connection to earlier outputs becomes loose and previously trained content is forgotten, which is why the long short-term memory network is used.
Preferably, the specific process of step S04 includes: extracting the non-candidate-region part of the picture with an RCNN, executing step S03 again on it, and then combining the information and text of the candidate and non-candidate regions to generate the textual description. The recurrent neural network performs the final learning and training on the different regions and corresponding text of multiple groups of pictures and outputs the content of the picture in the form of candidate-region information and non-candidate-region information.
Preferably, the output of the convolutional neural network is calculated as:
y = σ(Σ w_i·x_i + b)
where y denotes the output of the entire network, σ denotes the activation function, w_i denotes the weight between neurons, x_i denotes the i-th input neuron, and b denotes the weight bias coefficient. The whole convolutional neural network uses forward and backward propagation algorithms. Each input node corresponds to a node in the first hidden layer, so the layers are connected one by one; adding a weight in front of each node effectively increases the proportion of the part of interest, and adding a weight bias coefficient in front of each layer makes the final target classification more accurate.
The technical scheme also includes a neural-network-based information analysis model constructed by the above method.
The substantial effects of the invention include a wide application range, comprehensive detection angles and modes, the ability to output intuitive text expressing the picture information, faster acquisition of picture information, and improved detection accuracy.
Detailed Description
The technical solution is further described with reference to specific examples.
Example:
A neural-network-based information analysis method and model comprise the following steps. S01: segmenting the picture and extracting candidate regions. The specific process of step S01 includes: segmenting the picture with a region proposal network (RPN), marking an information frame around each part of the picture to be extracted and returning a score for it, then comparing the returned score with a preset threshold; the parts whose scores are greater than or equal to the threshold are the target candidate regions that meet the requirement.
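As a minimal illustrative sketch (not the patented implementation), the score-threshold selection of step S01 could be written in Python as follows; the proposal boxes, the scores, and the threshold value 0.7 are assumptions introduced for this example, with the proposals taken to come from an already trained RPN.

import numpy as np

def select_candidate_regions(boxes, scores, score_threshold=0.7):
    # boxes:  (N, 4) array of proposal frames [x1, y1, x2, y2] returned by the RPN
    # scores: (N,) array of scores returned for each frame
    boxes = np.asarray(boxes, dtype=np.float32)
    scores = np.asarray(scores, dtype=np.float32)
    keep = scores >= score_threshold  # parts with score >= threshold are the target candidate regions
    return boxes[keep], scores[keep]

Only the kept regions would then be passed on to the convolutional neural network of step S02.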
S02: identifying and classifying the extracted candidate regions with a convolutional neural network. The specific process of step S02 includes: adding a weight and a weight bias coefficient in front of each node of the convolutional neural network; the candidate-region information fed to the input layer is combined with the corresponding weight and bias coefficient before being passed to the hidden layer, and the hidden-layer result is likewise combined with its weight and bias coefficient before being output. Adding the weights and weight bias coefficients to the convolutional neural network makes it clear which situations are more serious, so that the classified targets plainly distinguish which are more harmful, and the classification becomes more effective.
The output of the convolutional neural network is calculated as:
y = σ(Σ w_i·x_i + b)
where y denotes the output of the entire network, σ denotes the activation function, w_i denotes the weight between neurons, x_i denotes the i-th input neuron, and b denotes the weight bias coefficient. The whole convolutional neural network uses forward and backward propagation algorithms. Each input node corresponds to a node in the first hidden layer, so the layers are connected one by one; adding a weight in front of each node effectively increases the proportion of the part of interest, and adding a weight bias coefficient in front of each layer makes the final target classification more accurate.
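A small NumPy sketch of the output formula y = σ(Σ w_i·x_i + b) for a single node; the sigmoid activation and the sample values below are assumptions chosen only for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def node_output(x, w, b):
    # y = sigma(sum_i w_i * x_i + b), with b the weight bias coefficient
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, 0.2, 0.8])  # input neurons x_1, x_2, x_3
w = np.array([0.4, 0.3, 0.6])  # weights w_1, w_2, w_3
b = 0.1                        # weight bias coefficient
y = node_output(x, w, b)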
For example, let x_1, ..., x_n denote the input layer of the network, h_1, ..., h_n denote a hidden layer, and y denote the final output; the extra "+1" neuron of each layer is a bias neuron, written b in the algorithm's calculations. For convenience, only the first few neurons of the input layer are considered: w denotes the weights from the input layer to the hidden layer, and h_1, h_2, h_3 denote the outputs of the hidden layer. The output corresponding to hidden neuron h_1 is then:
h_1 = σ(w_11^(2)·x_1 + w_12^(2)·x_2 + w_13^(2)·x_3 + b_1)
In the above formula, each h is the output passed from one layer to the next (the left-hand side above), and w is the weight coefficient between neurons, where
w_11^(2)
represents the weight coefficient between the first neuron of the second layer and the first neuron of the previous layer, and likewise
w_12^(2)
represents the weight coefficient between the first neuron of the second layer and the second neuron of the previous layer; x represents the corresponding input neuron in the input layer and b the offset of each layer's neurons. Finally, the relation between the weight coefficients and the inputs is computed according to matrix operations to obtain the layer-wise output formula. Computing in turn, the output expressions of the remaining hidden neurons are:
h_2 = σ(w_21^(2)·x_1 + w_22^(2)·x_2 + w_23^(2)·x_3 + b_2)
h_3 = σ(w_31^(2)·x_1 + w_32^(2)·x_2 + w_33^(2)·x_3 + b_3)
The parameters in the above two formulas have the same meaning as before. Other hidden layers are omitted from the construction, and only the parameter transfer of the first three neurons of each layer is analyzed; from the neuron transfer formulas, the expression for the final output layer y is:
y = σ(w_11^(3)·h_1 + w_12^(3)·h_2 + w_13^(3)·h_3 + b^(3))
Extending the formula from this simple structure to a complex multilayer neural network, with i neurons in layer L−1 and j neurons in layer L, the output expression of each neuron in layer L is:
h_j^(L) = σ(Σ_i w_ji^(L)·h_i^(L−1) + b_j^(L))
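A sketch of the layer-wise formula applied as a full forward pass; the layer sizes and the random initialization are illustrative assumptions rather than values from the patent.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    # h^(L) = sigma(W^(L) h^(L-1) + b^(L)), applied layer by layer
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
    return h

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 4, 1]  # input layer, two hidden layers, output layer
weights = [rng.standard_normal((m, n)) for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.standard_normal(m) for m in layer_sizes[1:]]
y = forward(np.array([0.5, 0.2, 0.8]), weights, biases)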
s03: matching the text with the classified candidate area information by using a recurrent neural network; the specific process of step S03 includes: and training the text and the candidate area information by using a long-time memory network of the recurrent neural network, and performing matching of the text and the candidate area information. The traditional cyclic neural network adopts a gradient descent algorithm, the network is expanded according to time, namely when a hidden layer is too long, a certain time difference exists when one neuron acts on the next neuron backwards to output, so that the previous output contact is not tight, and the training content is forgotten.
S04: generating a textual description of the picture. The specific process of step S04 includes: extracting the non-candidate-region part of the picture with an RCNN, executing step S03 again on it, and then combining the information and text of the candidate and non-candidate regions to generate the textual description. The recurrent neural network performs the final learning and training on the different regions and corresponding text of multiple groups of pictures and outputs the content of the picture in the form of candidate-region information and non-candidate-region information.
The process of generating the textual description is formed by jointly training a text alignment algorithm and a language generation algorithm. Features of the language modality and the image modality are first extracted and vectorized separately, then fused in a fully connected layer to obtain the corresponding vectors of image emotion-category regions and text fields, and corresponding blocks from image regions to text fields are generated. A regression equation of the loss function is built from the fit scores between the two, and gradient descent is applied in the regression iterations to minimize the objective function and propagate parameter updates back through the whole network, yielding an optimized image-text matching region. After the alignment of targets and text fields is finished, the set pairs of picture vectors and corresponding text vectors are fed into the language generation network, and the text fields are ordered, combined and listed side by side at the position of each field to obtain the final description sentence.
For example, take an image vector and the corresponding text-field feature vectors as input, where {x_1, x_2, ..., x_n} denotes the text vectors and x_t the feature vector of the t-th text field; the input image vector is obtained from step S02. The expressions relating each level to its output are:
b_v = W_hi·vec
h_t = σ(W_hx·x_t + W_hh·h_(t−1) + b_h + 1(t=1)⊙b_v)
y_t = softmax(W_oh·h_t + b_o)
In the above formulas, the parameters to be learned include W_hi, W_hx, W_hh, W_oh, b_h and b_o. From the output h_(t−1), which carries the information of the previous moment, plus the input information x_t at the current moment, the predicted output at the current moment, denoted y_t, is obtained; a specific bias is added for adjustment, and so on, giving at each moment the prediction field corresponding to the previous moment until an end marker appears, i.e. the image-text set pairs are used up, which completes the whole language generation process. In generating the language description, the input set includes candidate regions and non-candidate regions, and the detection results are input as text-field vectors and finally embodied in the language description.
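A minimal NumPy sketch of the three recurrences above; tanh standing in for σ and the randomly chosen dimensions are assumptions made only for illustration.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate(image_vec, text_feats, W_hi, W_hx, W_hh, W_oh, b_h, b_o):
    # b_v = W_hi . vec, injected only at the first step via the (t = 1) indicator
    b_v = W_hi @ image_vec
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for t, x_t in enumerate(text_feats):
        pre = W_hx @ x_t + W_hh @ h + b_h
        if t == 0:
            pre = pre + b_v
        h = np.tanh(pre)                         # h_t
        outputs.append(softmax(W_oh @ h + b_o))  # y_t, the predicted word distribution
    return outputs

d_img, d_txt, d_h, vocab = 8, 6, 10, 20
rng = np.random.default_rng(1)
y_seq = generate(rng.standard_normal(d_img), rng.standard_normal((4, d_txt)),
                 rng.standard_normal((d_h, d_img)), rng.standard_normal((d_h, d_txt)),
                 rng.standard_normal((d_h, d_h)), rng.standard_normal((vocab, d_h)),
                 rng.standard_normal(d_h), rng.standard_normal(vocab))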
In the text alignment model, a matching-field score is given to each region during alignment; different scores express the degree of matching between an image region and a text field, and a higher score means a better match. The algorithm yields the final matching-score formula for the text:
S_kl = Σ_(t∈g_l) Σ_(i∈g_k) max(0, v_i^T·s_t)
In the above formula, g_k and g_l denote the blocks of the image-region set and the text-field set respectively. Expressed in the form of the inner product
v_i^T·s_t
the matching result of image region k and text field l is obtained as their matching metric, denoted S_kl, and the formula simplifies to
S_kl = Σ_(t∈g_l) max_(i∈g_k) v_i^T·s_t
The above equation means that each text field is aligned only with the most matched image region, i.e., with the image region with the highest matching score, resulting in a final structured loss function of
C(θ) = Σ_k [ Σ_l max(0, S_kl − S_kk + 1) + Σ_l max(0, S_lk − S_kk + 1) ]
In the loss-function formula, the text field with the highest matching score for an image region is compared locally in the loss layer with the real label of that image region, so that the gap between S_lk and S_kl becomes as small as possible and the matching score between the image region and the text field is highest. During this local comparison between the matching region and its real label in the loss layer, the vector distance between the two is computed and used as the input of the loss function; minimizing the loss function optimizes the model and updates the network parameters.
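A sketch of the simplified matching score S_kl and a margin-style structured loss of the kind described above; the margin value of 1 and the exact form of the loss are assumptions reconstructed for illustration, not the patent's verbatim formula.

import numpy as np

def matching_score(region_vecs, word_vecs):
    # S_kl: every text field is aligned with the image region giving the largest inner product
    sims = region_vecs @ word_vecs.T      # (num_regions, num_words) inner products v_i^T s_t
    return sims.max(axis=0).sum()

def structured_loss(S, margin=1.0):
    # S[k, l] is the score between picture k and sentence l; the diagonal holds the true pairs
    K = S.shape[0]
    diag = np.diag(S)
    loss = 0.0
    for k in range(K):
        for l in range(K):
            if l != k:
                loss += max(0.0, S[k, l] - diag[k] + margin)  # wrong sentence ranked above the true one
                loss += max(0.0, S[l, k] - diag[k] + margin)  # wrong picture ranked above the true one
    return loss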
Finally, the learning, training and parameter optimization of the whole network are completed by minimizing the loss function with stochastic gradient descent, giving the final analysis model.
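A skeletal PyTorch loop showing how such a loss could be minimized by stochastic gradient descent; the model, data loader, loss function and learning rate are placeholders assumed for this sketch.

import torch

def train(model, loss_fn, data_loader, epochs=10, lr=1e-3):
    # Minimize the loss with stochastic gradient descent to update the whole network's parameters.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for region_feats, text_ids in data_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(region_feats, text_ids))
            loss.backward()        # back-propagate gradients through the network
            optimizer.step()       # parameter update
    return model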
It should be noted that the specific examples are only used to further illustrate the technical solution and not to limit its scope; any modification, equivalent replacement, improvement or the like based on the technical solution shall be considered to fall within the protection scope of the present invention.

Claims (6)

1. An information analysis method based on a neural network, characterized by comprising the following steps:
s01: segmenting the picture and extracting candidate regions;
s02: identifying and classifying the extracted candidate regions with a convolutional neural network;
s03: matching text with the classified candidate-region information using a recurrent neural network;
s04: extracting the non-candidate-region part of the picture through the RCNN, executing step S03 again, and then combining the information and text of the candidate region and the non-candidate region to generate a textual description;
the process of generating the character description is formed by combining and training a text alignment algorithm and a language generation algorithm, after characteristics of a language mode and an image mode are extracted and vectorized respectively, the characteristics of vectors are fused in a full connection layer to obtain corresponding vectors of an image emotion category region and a text field, corresponding blocks from the image region to the text field are generated, a regression equation of a loss function is established according to the fit degree scores of the two, and finally a gradient descent method is applied to regression iteration to update a minimized objective function and reversely update parameters of the whole network so as to obtain an optimized image text matching region; after the alignment work of a target image area and a text field is finished, inputting a set pair of a picture vector and a corresponding text vector into a language generation network, ordering each text field, combining and listing the position of each field side by side to obtain a final description sentence, in a text alignment model, giving a matching field score of each area in the alignment process, expressing the matching degree of the image area and the text field by different scores, expressing the matching more when the score is higher, and expressing the matching more in a text alignment algorithm process to obtain a final matching score formula:
S_kl = Σ_(t∈g_l) Σ_(i∈g_k) max(0, v_i^T·s_t)
in the above formula, g_k and g_l denote the blocks of the image-region set and the text-field set respectively; expressed in the form of the inner product
v_i^T·s_t
the matching result of image region k and text field l is obtained as their matching metric, denoted S_kl, and the formula simplifies to
S_kl = Σ_(t∈g_l) max_(i∈g_k) v_i^T·s_t
The above equation means that each text field is aligned only with the most matched image region, i.e., with the image region with the highest matching score, resulting in a final structured loss function of
C(θ) = Σ_k [ Σ_l max(0, S_kl − S_kk + 1) + Σ_l max(0, S_lk − S_kk + 1) ]
In the loss-function formula, the text field with the highest matching score for an image region is compared locally in the loss layer with the real label of that image region, so that the matching score between the image region and the text field is highest; in this local comparison of the matching region with its real label in the loss layer, the vector distance between the two is computed and used as the input of the loss function, the loss function is minimized, and model optimization and network parameter updating are carried out.
2. The information analysis method based on a neural network according to claim 1, wherein the specific process of step S01 comprises: segmenting the picture with an RPN network, marking an information frame around each part of the picture to be extracted and returning a score, then comparing the returned score with a preset threshold, the parts whose scores are greater than or equal to the threshold being the target candidate regions that meet the requirement.
3. The information analysis method based on a neural network according to claim 1 or 2, wherein the specific process of step S02 comprises: adding a weight and a weight bias coefficient in front of each node of the convolutional neural network; the candidate-region information fed to the input layer is combined with the corresponding weight and bias coefficient before being passed to the hidden layer, and the hidden-layer result is likewise combined with its weight and bias coefficient before being output.
4. The information analysis method based on a neural network according to claim 1, wherein the specific process of step S03 comprises: training the text and the candidate-region information with a long short-term memory network of the recurrent neural network and matching the text to the candidate-region information.
5. The neural-network-based information analysis method according to claim 3, wherein the output of the convolutional neural network is calculated as:
y = σ(Σ w_i·x_i + b)
where y denotes the output of the entire network, σ denotes the activation function, w_i denotes the weight between neurons, x_i denotes the i-th input neuron, and b denotes the weight bias coefficient.
6. An information analysis model based on a neural network, constructed by the information analysis method according to any one of claims 1 to 5.
CN201910522299.XA 2019-06-17 2019-06-17 Information analysis method and model based on neural network Active CN110378335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910522299.XA CN110378335B (en) 2019-06-17 2019-06-17 Information analysis method and model based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910522299.XA CN110378335B (en) 2019-06-17 2019-06-17 Information analysis method and model based on neural network

Publications (2)

Publication Number Publication Date
CN110378335A CN110378335A (en) 2019-10-25
CN110378335B (en) 2021-11-19

Family

ID=68249014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910522299.XA Active CN110378335B (en) 2019-06-17 2019-06-17 Information analysis method and model based on neural network

Country Status (1)

Country Link
CN (1) CN110378335B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316064A (en) * 2017-06-26 2017-11-03 长安大学 A kind of asphalt pavement crack classifying identification method based on convolutional neural networks
CN107766894A (en) * 2017-11-03 2018-03-06 吉林大学 Remote sensing images spatial term method based on notice mechanism and deep learning
CN108960330A (en) * 2018-07-09 2018-12-07 西安电子科技大学 Remote sensing images semanteme generation method based on fast area convolutional neural networks
CN109376242A (en) * 2018-10-18 2019-02-22 西安工程大学 Text classification algorithm based on Recognition with Recurrent Neural Network variant and convolutional neural networks

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107066446B (en) * 2017-04-13 2020-04-10 广东工业大学 Logic rule embedded cyclic neural network text emotion analysis method
US11188581B2 (en) * 2017-05-10 2021-11-30 Fmr Llc Identification and classification of training needs from unstructured computer text using a neural network
KR101930940B1 (en) * 2017-07-20 2018-12-20 에스케이텔레콤 주식회사 Apparatus and method for analyzing image
CN107808146B (en) * 2017-11-17 2020-05-05 北京师范大学 Multi-mode emotion recognition and classification method
CN108595601A (en) * 2018-04-20 2018-09-28 福州大学 A kind of long text sentiment analysis method incorporating Attention mechanism
CN108664632B (en) * 2018-05-15 2021-09-21 华南理工大学 Text emotion classification algorithm based on convolutional neural network and attention mechanism

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316064A (en) * 2017-06-26 2017-11-03 长安大学 A kind of asphalt pavement crack classifying identification method based on convolutional neural networks
CN107766894A (en) * 2017-11-03 2018-03-06 吉林大学 Remote sensing images spatial term method based on notice mechanism and deep learning
CN108960330A (en) * 2018-07-09 2018-12-07 西安电子科技大学 Remote sensing images semanteme generation method based on fast area convolutional neural networks
CN109376242A (en) * 2018-10-18 2019-02-22 西安工程大学 Text classification algorithm based on Recognition with Recurrent Neural Network variant and convolutional neural networks

Also Published As

Publication number Publication date
CN110378335A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
Lazaridou et al. Emergence of linguistic communication from referential games with symbolic and pixel input
CN113496217B (en) Method for identifying human face micro expression in video image sequence
US20200285896A1 (en) Method for person re-identification based on deep model with multi-loss fusion training strategy
Bendale et al. Towards open set deep networks
CN110008338B (en) E-commerce evaluation emotion analysis method integrating GAN and transfer learning
CN110390363A (en) A kind of Image Description Methods
CN110414541B (en) Method, apparatus, and computer-readable storage medium for identifying an object
CN109993102A (en) Similar face retrieval method, apparatus and storage medium
US10943352B2 (en) Object shape regression using wasserstein distance
CN112464808A (en) Rope skipping posture and number identification method based on computer vision
CN110727844B (en) Online commented commodity feature viewpoint extraction method based on generation countermeasure network
CN115526874B (en) Method for detecting loss of round pin and round pin cotter pin of brake adjuster control rod
CN113033587B (en) Image recognition result evaluation method and device, electronic equipment and storage medium
CN113127737B (en) Personalized search method and search system integrating attention mechanism
CN112527966A (en) Network text emotion analysis method based on Bi-GRU neural network and self-attention mechanism
CN110598737B (en) Online learning method, device, equipment and medium of deep learning model
CN114218457B (en) False news detection method based on forwarding social media user characterization
CN117313709B (en) Method for detecting generated text based on statistical information and pre-training language model
CN110378335B (en) Information analysis method and model based on neural network
CN116402811B (en) Fighting behavior identification method and electronic equipment
Audhkhasi et al. Data-dependent evaluator modeling and its application to emotional valence classification from speech.
CN115878804B (en) E-commerce evaluation multi-classification emotion analysis method based on AB-CNN model
CN116228989A (en) Three-dimensional track prediction method, device, equipment and medium
Tchuiev et al. Epistemic uncertainty aware semantic localization and mapping for inference and belief space planning
CN114168769A (en) Visual question-answering method based on GAT (generic object transform) relational reasoning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230928

Address after: Room 118, 1st Floor, Building 8, No. 19 Jugong Road, Xixing Street, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Patentee after: Hangzhou Pengtai Electric Power Design Consulting Co.,Ltd.

Address before: 310018 Xiasha Higher Education Zone, Hangzhou, Zhejiang, Jianggan District

Patentee before: HANGZHOU DIANZI University
