CN109460812B - Intermediate information analysis device, optimization device, and feature visualization device for neural network - Google Patents



Publication number
CN109460812B
CN109460812B (application CN201710794559.XA)
Authority
CN
China
Prior art keywords
neural network
intermediate information
hidden state
long short-term memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710794559.XA
Other languages
Chinese (zh)
Other versions
CN109460812A (en)
Inventor
尹汭
谭志明
白向晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to CN201710794559.XA priority Critical patent/CN109460812B/en
Priority to JP2018117650A priority patent/JP7047620B2/en
Publication of CN109460812A publication Critical patent/CN109460812A/en
Application granted granted Critical
Publication of CN109460812B publication Critical patent/CN109460812B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 Physical realisation using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G06N3/063 Physical realisation using electronic means
    • G06N3/08 Learning methods

Abstract

Embodiments of the invention provide an intermediate information analysis device, an optimization device, a feature visualization device and method, and electronic equipment for a neural network. The hidden state output by a long short-term memory (LSTM) layer in the neural network is simplified by a dimensionality-reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network, and the output result of the neural network. The working principle of the neural network can be analyzed in essence from this intermediate information, so that the various applications of neural networks with an LSTM layer can be optimized and improved. Optimizing the parameters of the neural network according to the intermediate information can effectively improve the network's performance in its applications; in addition, the features contained in the hidden state can be visualized from the intermediate information, which facilitates the study of neural networks with an LSTM layer.

Description

Intermediate information analysis device, optimization device, and feature visualization device for neural network
Technical Field
The present invention relates to the field of information technology, and in particular to an intermediate information analysis device, an optimization device, a feature visualization device and method, and electronic equipment for a neural network.
Background
In recent years, deep learning based on neural networks has been widely applied in the field of machine vision. As one kind of neural network, the Long Short-Term Memory (LSTM) network, that is, a neural network having a long short-term memory layer, performs better than other methods in the field of event detection in video. This is because a neural network with a long short-term memory layer can remember useful information over time and forget useless information. The key to a neural network with a long short-term memory layer is the information output by that layer, called the hidden state; the hidden state is a multidimensional matrix representing the features extracted by the long short-term memory layer.
It should be noted that the above background description is only for the sake of clarity and complete description of the technical solutions of the present invention and for the understanding of those skilled in the art. Such solutions are not considered to be known to the person skilled in the art merely because they have been set forth in the background section of the invention.
Disclosure of Invention
However, although a neural network with a long short-term memory layer can be used for event detection and other applications, such a network remains a black box: the long short-term memory layer cannot be analyzed in essence, and it can only be explained through attempts made from the perspective of experience and mathematical principles.
Embodiments of the invention provide an intermediate information analysis device, an optimization device, a feature visualization device and method, and electronic equipment for a neural network. The hidden state output by a long short-term memory (LSTM) layer in the neural network is simplified by a dimensionality-reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network, and the output result of the neural network. The working principle of the neural network can be analyzed in essence from this intermediate information, so that the various applications of neural networks with an LSTM layer can be optimized and improved. Optimizing the parameters of the neural network according to the intermediate information can effectively improve the network's performance in its applications; in addition, the features contained in the hidden state can be visualized from the intermediate information, which facilitates the study of neural networks with an LSTM layer.
According to a first aspect of the embodiments of the present invention, there is provided an intermediate information analysis device for a neural network, the neural network having a long short-term memory layer, the device including: an obtaining unit, configured to obtain the hidden state output by the long short-term memory layer in the neural network; a simplification unit, configured to simplify the hidden state using a dimensionality-reduction algorithm; and a first analysis unit, configured to analyze the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network.
According to a second aspect of the embodiments of the present invention, there is provided an optimization device for a neural network, the neural network having a long short-term memory layer, the device including: the intermediate information analysis device of a neural network according to the first aspect of the embodiments of the present invention; a second analysis unit, configured to analyze, according to the intermediate information of the neural network, the change rule of the intermediate information caused by changes in the input data of the neural network, and the influence of changes in the intermediate information on the output result of the neural network; and an optimization unit, configured to optimize the parameters of the neural network according to the change rule of the intermediate information and the influence of changes in the intermediate information on the output result of the neural network.
According to a third aspect of the embodiments of the present invention, there is provided a feature visualization device for a neural network having a long short-term memory layer, the device including: the intermediate information analysis device of a neural network according to the first aspect of the embodiments of the present invention; and a visualization unit, configured to represent, in visualized form and according to the intermediate information of the neural network, the physical meaning of the features contained in the hidden state output by the long short-term memory layer and the change of those features over time.
According to a fourth aspect of the embodiments of the present invention, there is provided an electronic device including the intermediate information analysis apparatus of a neural network according to the first aspect of the embodiments of the present invention.
According to a fifth aspect of the embodiments of the present invention, there is provided a method of analyzing intermediate information of a neural network, the neural network having a long short-term memory layer, the method including: obtaining the hidden state output by the long short-term memory layer in the neural network; simplifying the hidden state using a dimensionality-reduction algorithm; and analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network.
According to a sixth aspect of the embodiments of the present invention, there is provided a method of optimizing a neural network, the neural network having a long short-term memory layer, the method including: obtaining the hidden state output by the long short-term memory layer in the neural network; simplifying the hidden state using a dimensionality-reduction algorithm; analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network; analyzing, according to the intermediate information of the neural network, the change rule of the intermediate information caused by changes in the input data of the neural network, and the influence of changes in the intermediate information on the output result of the neural network; and optimizing the parameters of the neural network according to the change rule of the intermediate information and the influence of changes in the intermediate information on the output result of the neural network.
According to a seventh aspect of the embodiments of the present invention, there is provided a method of visualizing the features of a neural network having a long short-term memory layer, the method including: obtaining the hidden state output by the long short-term memory layer in the neural network; simplifying the hidden state using a dimensionality-reduction algorithm; analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network; and representing, in visualized form and according to the intermediate information of the neural network, the physical meaning of the features contained in the hidden state output by the long short-term memory layer and the change of those features over time.
The beneficial effects of the invention are as follows: the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality-reduction algorithm, and the intermediate information of the neural network is analyzed according to the simplified hidden state, the input data of the neural network, and the output result of the neural network; the working principle of the neural network can be analyzed in essence according to the intermediate information, so that the various applications of neural networks with a long short-term memory layer can be optimized and improved. Optimizing the parameters of the neural network according to the intermediate information can effectively improve its performance in applications; in addition, the features contained in the hidden state can be visualized from the intermediate information, which facilitates the study of neural networks with a long short-term memory layer.
Specific embodiments of the present invention are disclosed in detail with reference to the following description and drawings, indicating the manner in which the principles of the invention may be employed. It should be understood that the embodiments of the invention are not so limited in scope. The embodiments of the invention include many variations, modifications and equivalents within the spirit and scope of the appended claims.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments, in combination with or instead of the features of the other embodiments.
It should be emphasized that the term "comprises/comprising" when used herein, is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps or components.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
Fig. 1 is a schematic diagram of an intermediate information analysis device of a neural network according to Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of a neural network structure according to Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the simplification unit 102 of Embodiment 1 of the present invention;
Fig. 4 is a diagram of the simplified hidden state of Embodiment 1 of the present invention;
Fig. 5 shows a frame image from each region of the hidden state in Embodiment 1 of the present invention;
Fig. 6 is a schematic diagram of an optimization device of a neural network according to Embodiment 2 of the present invention;
Fig. 7 is a schematic diagram of a feature visualization device of a neural network according to Embodiment 3 of the present invention;
Fig. 8 is a schematic diagram of an electronic apparatus according to Embodiment 4 of the present invention;
Fig. 9 is a schematic block diagram of the system configuration of an electronic apparatus according to Embodiment 4 of the present invention;
Fig. 10 is a schematic diagram of an intermediate information analysis method of a neural network according to Embodiment 5 of the present invention;
Fig. 11 is a schematic diagram of an optimization method of a neural network according to Embodiment 6 of the present invention;
Fig. 12 is a schematic diagram of a feature visualization method of a neural network according to Embodiment 7 of the present invention.
Detailed Description
The foregoing and other features of the invention will become apparent from the following description taken in conjunction with the accompanying drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the embodiments in which the principles of the invention may be employed, it being understood that the invention is not limited to the embodiments described, but, on the contrary, is intended to cover all modifications, variations, and equivalents falling within the scope of the appended claims.
Embodiment 1
The embodiment of the invention provides an intermediate information analysis device for a neural network, where the neural network has a long short-term memory layer. Fig. 1 is a schematic diagram of the intermediate information analysis device of a neural network according to Embodiment 1 of the present invention. As shown in Fig. 1, the intermediate information analysis device 100 of the neural network includes:
an obtaining unit 101, configured to obtain the hidden state output by the long short-term memory layer in the neural network;
a simplification unit 102, configured to simplify the hidden state using a dimensionality-reduction algorithm;
a first analysis unit 103, configured to analyze the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network.
In this embodiment, the neural network may be any neural network having a long short-term memory layer; the embodiments of the present invention do not particularly limit the structure of the neural network. The structure of the neural network is described below by way of example.
Fig. 2 is a schematic diagram of the structure of a neural network according to Embodiment 1 of the present invention. As shown in Fig. 2, the neural network 200 has a data input layer 201, five convolutional layers 202, a first fully-connected layer 203, a long short-term memory (LSTM) layer 204, a second fully-connected layer 205, and an output layer 206.
For example, when the data input to the neural network is a number of frames of video, the dimensions of the data input to the data input layer 201 may be 16 × 3 × 227 × 227, where 16 denotes the number of input frames, 3 denotes that each frame has three color channels (red, green, and blue), and each frame is of size 227 × 227. The LSTM layer 204 has 16 LSTM units, which means that it can memorize information over 16 consecutive time instants. The hidden state output by the LSTM layer 204 is a 16 × 256 matrix, where 256 denotes the number of features extracted by the LSTM layer and 16 corresponds to the 16 LSTM units.
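The shapes described above can be sketched with a minimal NumPy LSTM cell; the weight values, the pre-LSTM feature dimension (512 here), and all numeric values below are illustrative assumptions, not details from the patent:

```python
import numpy as np

# Hypothetical sketch: 16 time steps (one per input frame) through an
# LSTM cell with 256 hidden features, yielding a 16 x 256 hidden state.
rng = np.random.default_rng(0)
T, D, H = 16, 512, 256          # time steps, assumed input dim, hidden size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [input, previous hidden state].
W = {g: rng.standard_normal((D + H, H)) * 0.01 for g in "ifoc"}

x = rng.standard_normal((T, D))  # stand-in for per-frame features
h = np.zeros(H)
c = np.zeros(H)
hidden_states = []
for t in range(T):
    z = np.concatenate([x[t], h])
    i = sigmoid(z @ W["i"])      # input gate
    f = sigmoid(z @ W["f"])      # forget gate: discards useless information
    o = sigmoid(z @ W["o"])      # output gate
    g = np.tanh(z @ W["c"])      # candidate cell state
    c = f * c + i * g            # cell state remembers over time
    h = o * np.tanh(c)
    hidden_states.append(h)

X = np.stack(hidden_states)      # the 16 x 256 hidden-state matrix
print(X.shape)                   # (16, 256)
```

The stacked hidden states form exactly the 16 × 256 matrix the obtaining unit 101 retrieves below.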
In this embodiment, the obtaining unit 101 obtains the hidden state output by the long short-term memory layer in the neural network, for example the 16 × 256 matrix output by the LSTM layer 204 in Fig. 2; each element of the matrix characterizes a component of the hidden state, and the dimension of the features contained in the matrix is 256.
In this embodiment, the simplification unit 102 simplifies the hidden state using a dimensionality-reduction algorithm. Various dimensionality-reduction algorithms can be used to perform the simplification, such as a Principal Component Analysis (PCA) algorithm, an Independent Component Analysis (ICA) algorithm, or a Partial Least Squares (PLS) algorithm.
In this embodiment, the simplification process is described taking the principal component analysis algorithm as an example.
Fig. 3 is a schematic diagram of the simplification unit 102 according to Embodiment 1 of the present invention. As shown in Fig. 3, the simplification unit 102 includes:
a building unit 301, configured to subtract from each element in the matrix of the hidden state the average value of the column in which that element is located, and to build a matrix from the results;
a first calculating unit 302, configured to calculate the covariance matrix of the built matrix;
a second calculating unit 303, configured to calculate the eigenvalues of the covariance matrix;
a third calculating unit 304, configured to calculate the eigenvectors of a preset number of largest eigenvalues;
a fourth calculating unit 305, configured to calculate a projection matrix from the eigenvectors and to use the projection matrix as the simplified hidden state.
For example, the matrix X of hidden states output by the LSTM layer 204 may be expressed as:

X = (x_{i,j}), i = 1, …, m, j = 1, …, 256 (1)

where X represents the matrix of hidden states output by the LSTM layer, m represents the number of rows of the matrix, and each element x_{i,j} of the matrix X represents a feature of the hidden state.
In the present embodiment, the building unit 301 subtracts from each element in the matrix X the average value of the column in which that element is located, and builds a matrix from the results; for example, the built matrix can be obtained according to the following formulas (2) and (3):

x̄_{i,j} = x_{i,j} − (1/m) Σ_{k=1…m} x_{k,j} (2)

X̄ = (x̄_{i,j}) (3)

where x̄_{i,j} represents the element x_{i,j} of the matrix X minus the average value of the column in which the element is located, X̄ represents the built matrix, and m represents the number of rows of the matrix.
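A minimal NumPy sketch of this centering step, using a random stand-in for the 16 × 256 hidden state:

```python
import numpy as np

# Column-mean centering of the hidden-state matrix X, as in formulas
# (2) and (3): subtract from each element the mean of its column.
rng = np.random.default_rng(1)
X = rng.standard_normal((16, 256))  # stand-in for the real hidden state
X_bar = X - X.mean(axis=0)          # broadcasting subtracts per-column means

# Every column of the built matrix now has (numerically) zero mean.
print(np.allclose(X_bar.mean(axis=0), 0.0))
```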
In this embodiment, the first calculating unit 302 calculates the covariance matrix of the built matrix; for example, the covariance matrix may be calculated according to the following formula (4):

S = (1/m) X̄ᵀ X̄ (4)

where S represents the covariance matrix of the built matrix X̄, m represents the number of rows of the matrix, and X̄ᵀ represents the transpose of the matrix X̄.
In the present embodiment, the second calculation unit 303 calculates the eigenvalue of the covariance matrix, for example, the eigenvalue may be calculated according to the following formula (5):
|λE-S|=0 (5)
where λ represents an eigenvalue, E represents an identity matrix, and S represents a covariance matrix.
In the present embodiment, the third calculation unit 304 calculates the eigenvectors of the preset number of largest eigenvalues.
For example, the feature vector of the feature value may be calculated according to the following equation (6):
Sω-λω=0 (6)
where ω denotes an eigenvector, S denotes a covariance matrix, and λ denotes an eigenvalue.
In this embodiment, the preset number may be set according to actual needs; for example, when the preset number is 3, the eigenvectors of the 3 largest eigenvalues already retain more than 95% of the information of the original hidden state.
For example, the eigenvectors of the 3 largest eigenvalues may be represented by the following formula (7):

W = [ω₁, ω₂, ω₃] (7)

where W represents the matrix of eigenvectors of the 3 largest eigenvalues, and ω₁, ω₂, ω₃ represent the eigenvectors of the largest, second-largest, and third-largest eigenvalues, respectively.
In this embodiment, the fourth calculating unit 305 calculates a projection matrix according to the eigenvector, and takes the projection matrix as the simplified hidden state. For example, the projection matrix may be calculated according to the following equation (8):
Y=XW (8)
where Y denotes the projection matrix, X denotes the matrix of hidden states output by the LSTM layer, and W denotes the matrix of eigenvectors of the 3 largest eigenvalues.
In the present embodiment, the size of the projection matrix Y is m × 3, so that the dimension of the feature of the hidden state is reduced from 256 to 3, the hidden state is greatly simplified, and the analysis of the intermediate information can be performed based on the simplified hidden state.
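The five units of the simplification unit 102 can be sketched end to end in NumPy; the random input stands in for a real hidden-state matrix, and the 1/m covariance normalization is an assumption consistent with formula (4):

```python
import numpy as np

# End-to-end sketch of the simplification unit (units 301-305): center
# the hidden state, form the covariance matrix, take the eigenvectors
# of the 3 largest eigenvalues, and project.
rng = np.random.default_rng(2)
m, d, k = 16, 256, 3
X = rng.standard_normal((m, d))       # stand-in for the hidden state

X_bar = X - X.mean(axis=0)            # building unit 301, formulas (2)-(3)
S = (X_bar.T @ X_bar) / m             # covariance matrix, formula (4)
eigvals, eigvecs = np.linalg.eigh(S)  # eigh: S is symmetric; ascending order
order = np.argsort(eigvals)[::-1]     # largest eigenvalues first, (5)-(6)
W = eigvecs[:, order[:k]]             # eigenvector matrix W, formula (7)
Y = X @ W                             # projection matrix, formula (8)

print(Y.shape)                        # (16, 3): feature dim reduced 256 -> 3
```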
In the present embodiment, the first column element of the projection matrix Y is referred to as a first PCA component and is denoted as PCA1, the second column element is referred to as a second PCA component and is denoted as PCA2, the third column element is referred to as a third PCA component and is denoted as PCA3, and PCA1, PCA2, and PCA3 all represent the values of the hidden state.
In the present embodiment, the data contribution rate of each column element of the projection matrix decreases column by column, that is, the data contribution rates of PCA1, PCA2, and PCA3 gradually decrease for the projection matrix Y.
In this embodiment, the first analysis unit 103 analyzes the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network.
In this embodiment, the intermediate information may characterize the hidden state information output by the LSTM layer of the neural network.
In this embodiment, the specific analysis method may be determined according to the application scenario of the neural network, the form of the input data, and the actual needs. The following describes an exemplary process of analyzing the intermediate information of the neural network according to the present embodiment.
For example, suppose the neural network is used to detect traffic events, and the input data is a number of frames of traffic-monitoring video arranged in time order. Fig. 4 is a diagram of the simplified hidden state according to Embodiment 1 of the present invention. As shown in Fig. 4, the solid line represents the curve of PCA1 over time (frame number), the dashed line represents the curve of PCA2 over time, and the dot-dash line represents the curve of PCA3 over time; according to the changes of the three curves, the whole hidden state is divided into seven regions ①–⑦. One frame image is selected from each region and displayed for analysis.
Fig. 5 shows a frame image from each region of the hidden state in Embodiment 1 of the present invention. As shown in Fig. 5, the frame images labeled ①–⑦ are selected from the regions ①–⑦ of Fig. 4, respectively.
In this embodiment, the output result of the neural network includes, for example, classification results for four traffic events: normal, accident, violation, and congestion (jam). As shown in Fig. 5, the output results produced by the neural network for the input frame images, that is, the probabilities of the various traffic events, are displayed on the frame images labeled ①–⑦.
As shown in Figs. 4 and 5, the curve of PCA1 (solid line) is high over most of the video but drops noticeably in several regions; in the frame images of the earlier regions the traffic event with the highest probability is normal, while in the frame images of the final regions it is accident. In frame image ① the distance between two vehicles is very close, which looks like an accident; but the value of PCA1 remains low only briefly, indicating that the neural network with the LSTM layer, taking the change of information over time into account, detects that the two vehicles have left, adjusts to forget the information that they were very close, and determines that no accident occurred. In frame image ② the value of PCA1 decreases because a bicycle and a motorcycle meet, and the probability with which the network judges the scene an accident increases; but because the meeting soon ends, the scene is still finally judged normal. In a later frame image a collision occurs between a motorcycle and a truck, and the value of PCA1 remains low until the end of the video, so the neural network with the LSTM layer accurately judges the scene an accident. From the above analysis it can be seen that the change in the value of PCA1, together with how long the value remains after the change, determines whether the traffic-event detection result changes; therefore, the curve of PCA1 can be interpreted as event recognition.
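One hedged way to operationalize this "event recognition" reading is to flag a drop in PCA1 as an accident only when it persists; the threshold, the minimum duration, and the toy PCA1 series below are invented for illustration, not taken from the patent:

```python
import numpy as np

# Hypothetical detector: a drop in PCA1 signals an accident only if the
# value stays low long enough, mirroring how the LSTM forgets brief events.
def sustained_drop(pca1, threshold, min_frames):
    """Return True if pca1 stays below threshold for >= min_frames in a row."""
    run = best = 0
    for v in pca1:
        run = run + 1 if v < threshold else 0
        best = max(best, run)
    return best >= min_frames

brief_dip = np.array([1.0, 1.0, 0.2, 0.3, 1.0, 1.0, 1.0, 1.0])  # cars pass close
collision = np.array([1.0, 1.0, 0.9, 0.2, 0.1, 0.1, 0.2, 0.1])  # low to the end

print(sustained_drop(brief_dip, threshold=0.5, min_frames=3))  # False: forgotten
print(sustained_drop(collision, threshold=0.5, min_frames=3))  # True: accident
```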
The curve of PCA2 (dashed line) remains at a relatively constant value most of the time. In regions ①, ③ and ⑦ the value of the PCA2 curve changes only slightly, and in region ⑦ in particular it is very stable. The reason is that in the frame images of regions ① and ③ moving vehicles appear at the edge of the scene only briefly, while in region ⑦ the vehicles are still for a long time, so the scene content is basically unchanged during these periods. The large changes of the PCA2 value in regions ②, ④, ⑤ and ⑥ are caused by new moving objects appearing in the picture for a long time; however, PCA2 responds not so much to the new moving object itself as to the more prominent change of the image as a whole. Thus, the curve of PCA2 can be interpreted as an overall representation of the image, which helps in understanding the impact of newly appearing objects on the whole image.
For the curve of PCA3 (dot-dash line), the values change sharply in regions ②, ④, ⑤ and ⑥, with a larger amplitude than PCA2, and in all of these regions new moving objects appear: the meeting of two vehicles can be seen in one frame image, the meeting of the bicycle and the motorcycle in another, and the meeting of the motorcycle and the truck in a third. In regions ①, ③ and ⑦ the change in the PCA3 value is small, and no new vehicles appear in those scenes. It can be seen that the change in the values of the PCA3 curve represents the response of the LSTM layer to new moving objects, which in turn affects the values of PCA1 and PCA2. Based on the curve of PCA3, the influence of a new moving object, and whether it changes the detection result, can be known.
As can be seen, the above process obtains intermediate information that can be interpreted, for example, as "event recognition", "overall image representation" and "new moving object"; based on this intermediate information, the data flow inside the neural network can be tracked and observed, so that the principle of the neural network can be analyzed in essence.
It can be seen from the above embodiment that the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality-reduction algorithm, the intermediate information of the neural network is analyzed according to the simplified hidden state, the input data of the neural network, and the output result of the neural network, and the principle of the neural network can be analyzed in essence according to the intermediate information, so that the various applications of neural networks with a long short-term memory layer can be optimized and improved.
Example 2
An embodiment of the present invention further provides an optimization apparatus for a neural network, the neural network having a long short-term memory layer. Fig. 6 is a schematic diagram of an optimization apparatus of a neural network according to embodiment 2 of the present invention. As shown in fig. 6, the optimizing apparatus 600 includes:
an intermediate information analyzing device 601 of the neural network, which is used for outputting the intermediate information of the neural network;
a second analysis unit 602, configured to analyze, according to the intermediate information of the neural network, a change rule of the intermediate information caused by a change in input data of the neural network, and an influence of the change in the intermediate information on an output result of the neural network;
an optimizing unit 603, configured to optimize a parameter of the neural network according to a change rule of the intermediate information and an influence of a change of the intermediate information on an output result of the neural network.
In this embodiment, the structure of the neural network is the same as that described in embodiment 1, and for example, it has a structure as shown in fig. 2.
In this embodiment, the structure and function of the neural network intermediate information analyzer 601 are the same as those described in embodiment 1, and are not described again here.
In this embodiment, after the intermediate information analysis device 601 of the neural network analyzes and obtains the intermediate information of the neural network, the second analysis unit 602 analyzes a change rule of the intermediate information caused by a change of the input data of the neural network and an influence of the change of the intermediate information on an output result of the neural network according to the intermediate information of the neural network.
For example, the input data of the neural network may be changed by masking regions of the frame images at certain points in time of the traffic monitoring video input to the neural network, or by manually adding new objects to the images; the response of the LSTM layer to the change in input data can then be observed through the changes in the simplified hidden state and in the output result of the neural network.
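Such an input-perturbation probe might be sketched as follows (a minimal sketch with NumPy; `run_network` and `reduce_fn` are hypothetical stand-ins for the real model's forward pass and for the dimensionality reduction step, neither of which is given as code in this text):

```python
import numpy as np

def mask_region(frames, y0, y1, x0, x1):
    """Return a copy of the frame sequence with a rectangular region zeroed out."""
    masked = frames.copy()
    masked[:, y0:y1, x0:x1] = 0.0
    return masked

def probe_hidden_state_change(frames, run_network, reduce_fn, region):
    """Measure, per time step, how masking an image region shifts the
    simplified (dimension-reduced) hidden state."""
    h_orig = reduce_fn(run_network(frames))
    h_masked = reduce_fn(run_network(mask_region(frames, *region)))
    return np.linalg.norm(h_orig - h_masked, axis=1)  # one distance per frame
```

Comparing this per-frame distance with the change in the network's classification output indicates which input changes the LSTM layer actually responds to.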
In this embodiment, the optimization unit 603 optimizes the parameters of the neural network according to the change rule of the intermediate information and the influence of the change of the intermediate information on the output result of the neural network.
For example, the parameters of the neural network are optimized so that the simplified hidden state, that is, the output of the LSTM layer, is sensitive to changes in the useful information. For example, when the response of the simplified hidden state lags behind changes in the traffic events, the number of LSTM cells may be increased to enhance the memory capability.
According to this embodiment, the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer; moreover, because the parameters of the neural network are optimized according to the intermediate information, the application effect of the neural network can be effectively improved.
Example 3
An embodiment of the present invention further provides a feature visualization apparatus for a neural network, the neural network having a long short-term memory layer. Fig. 7 is a schematic diagram of a feature visualization device of a neural network according to embodiment 3 of the present invention. As shown in fig. 7, the feature visualization device 700 includes:
an intermediate information analyzing device 701 of a neural network, for outputting intermediate information of the neural network;
a visualization unit 702, configured to visually represent the physical meaning of the feature included in the hidden state output by the long-term and short-term memory layer in the neural network and the change of the feature over time according to the intermediate information of the neural network.
In this embodiment, the structure of the neural network is the same as that described in embodiment 1, and for example, it has a structure as shown in fig. 2.
In this embodiment, the structure and function of the neural network intermediate information analyzer 701 are the same as those described in embodiment 1, and are not described again here.
In this embodiment, after the intermediate information analysis device 701 of the neural network analyzes and obtains the intermediate information of the neural network, the visualization unit 702 represents the physical meaning of the feature included in the hidden state output by the long-term and short-term memory layer in the neural network and the change of the feature over time in a visualized form according to the intermediate information of the neural network.
For example, as described in embodiment 1, the hidden state may include features that can be interpreted as "event recognition", "overall image representation" or "new moving object", and the visualization unit 702 may, for example, create curves of the change of the various features over time, as shown in fig. 4, to visualize the features.
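As a sketch of such a visualization (assuming matplotlib is available; the function and file names are illustrative, not part of the described apparatus), the reduced components could be plotted against the frame index:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def plot_feature_curves(reduced, labels, path):
    """Plot each column of the simplified hidden state as a curve over time,
    one line style per feature, and save the figure to `path`."""
    t = np.arange(reduced.shape[0])
    styles = ["-", "--", "-."]  # solid, dashed, dash-dot, as in the text
    fig, ax = plt.subplots()
    for i, label in enumerate(labels):
        ax.plot(t, reduced[:, i], styles[i % len(styles)], label=label)
    ax.set_xlabel("frame index")
    ax.set_ylabel("component value")
    ax.legend()
    fig.savefig(path)
    plt.close(fig)
```

Each labeled curve then corresponds to one interpretable feature, as the PCA1/PCA2/PCA3 curves do in fig. 4.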
According to this embodiment, the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer; furthermore, the features contained in the hidden state can be visualized according to the intermediate information, which facilitates research on, and study of, neural networks having a long short-term memory layer.
Example 4
An embodiment of the present invention further provides an electronic device, and fig. 8 is a schematic diagram of an electronic device according to embodiment 4 of the present invention. As shown in fig. 8, the electronic device 800 includes an intermediate information analysis apparatus 801 of a neural network having a long-term and short-term memory layer, and the structure and function of the intermediate information analysis apparatus 801 of the neural network are the same as those described in embodiment 1, and are not described again here.
Fig. 9 is a schematic block diagram of a system configuration of an electronic apparatus according to embodiment 4 of the present invention. As shown in fig. 9, the electronic device 900 may include a central processor 901 and a memory 902; the memory 902 is coupled to the central processor 901. The figure is exemplary; other types of structures may also be used in addition to or in place of the structure to implement telecommunications or other functions.
As shown in fig. 9, the electronic device 900 may further include: input unit 903, display 904, power supply 905.
In one embodiment, the function of the neural network intermediate information analyzing apparatus described in example 1 may be integrated into the central processor 901. The central processor 901 may be configured to: obtaining a hidden state output by the long-term and short-term memory layer in the neural network; simplifying the hidden state by using a dimension reduction algorithm; and analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network.
For example, the dimension reduction algorithm includes: a principal component analysis algorithm, an independent component analysis algorithm, or a partial least squares method.
For example, the dimension reduction algorithm is a principal component analysis algorithm, and simplifying the hidden state by using the dimension reduction algorithm includes: subtracting from each element in the matrix of the hidden state the average value of the column containing that element, and constructing a matrix from the results; calculating the covariance matrix of the constructed matrix; calculating the eigenvalues of the covariance matrix; calculating the eigenvectors of the preset number of largest eigenvalues; and calculating a projection matrix from the eigenvectors, taking the projection matrix as the simplified hidden state.
For example, the data contribution rate of each column element of the projection matrix decreases column by column.
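The steps just listed can be sketched with NumPy (a minimal illustration; the function name and the choice of k are assumptions of this sketch, not from the original text):

```python
import numpy as np

def simplify_hidden_state(H, k=3):
    """PCA-style simplification of an (n_steps, n_units) hidden-state matrix,
    following the steps in the text: center each column, compute the
    covariance matrix, take the eigenvectors of the k largest eigenvalues,
    and project onto them."""
    X = H - H.mean(axis=0)                 # subtract each column's average
    C = np.cov(X, rowvar=False)            # covariance of the centered matrix
    vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]         # largest eigenvalues first
    top = vecs[:, order[:k]]               # eigenvectors of the k largest eigenvalues
    projection = X @ top                   # the simplified hidden state
    contribution = vals[order[:k]] / vals.sum()  # per-column contribution rate
    return projection, contribution
```

Because the eigenvalues are taken in descending order, the data contribution rate of each column of the projection matrix decreases column by column, matching the property stated above.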
In another embodiment, the intermediate information analyzing apparatus of the neural network described in example 1 may be configured separately from the central processor 901, for example, the intermediate information analyzing apparatus of the neural network may be configured as a chip connected to the central processor 901, and the function of the intermediate information analyzing apparatus of the neural network is realized by the control of the central processor 901.
It is not necessary for the electronic device 900 to include all of the components shown in fig. 9 in this embodiment.
As shown in fig. 9, the central processor 901, sometimes referred to as a controller or operation controller, may comprise a microprocessor or other processor device and/or logic device; the central processor 901 receives input and controls the operation of the various components of the electronic device 900.
The memory 902, for example, may be one or more of a buffer, a flash memory, a hard drive, a removable media, a volatile memory, a non-volatile memory, or other suitable device. And the central processor 901 can execute the program stored in the memory 902 to realize information storage or processing, etc. The functions of other parts are similar to the prior art and are not described in detail here. The components of the electronic device 900 may be implemented in dedicated hardware, firmware, software, or combinations thereof, without departing from the scope of the invention.
It can be seen from the above embodiment that the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer.
Example 5
An embodiment of the present invention further provides an intermediate information analysis method for a neural network having a long-term and short-term memory layer, which corresponds to the intermediate information analysis device for the neural network of embodiment 1. Fig. 10 is a schematic diagram of an intermediate information analysis method of a neural network according to embodiment 5 of the present invention. As shown in fig. 10, the method includes:
step 1001: obtaining the hidden state of the long and short term memory layer output in the neural network;
step 1002: simplifying the hidden state by using a dimension reduction algorithm;
step 1003: and analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network.
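For context, step 1001 collects the hidden state at every time step of the sequence. A toy single-layer LSTM written directly in NumPy (a simplified stand-in for the real network; the gate equations are the standard ones, and the weight layout is an assumption of this sketch) illustrates what that collected matrix looks like:

```python
import numpy as np

def lstm_hidden_states(x_seq, W, U, b, n_units):
    """Run one LSTM layer over a sequence and collect the hidden state at
    every time step. Gate order in W, U, b: input, forget, cell, output."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    h = np.zeros(n_units)
    c = np.zeros(n_units)
    states = []
    for x in x_seq:
        z = W @ x + U @ h + b              # all four gates in one affine map
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                     # hidden state
        states.append(h.copy())
    return np.array(states)                # shape (n_steps, n_units)
```

The returned (n_steps, n_units) matrix is exactly the kind of hidden-state matrix that steps 1002 and 1003 then reduce and analyze.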
In this embodiment, the specific implementation method of the above steps is the same as that described in embodiment 1, and is not repeated here.
It can be seen from the above embodiment that the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer.
Example 6
The embodiment of the invention also provides an optimization method of the neural network, wherein the neural network is provided with a long-term and short-term memory layer, and the method corresponds to the optimization device of the neural network in the embodiment 2. Fig. 11 is a schematic diagram of an optimization method of a neural network according to embodiment 6 of the present invention. As shown in fig. 11, the method includes:
step 1101: obtaining the hidden state of the long and short term memory layer output in the neural network;
step 1102: simplifying the hidden state by using a dimension reduction algorithm;
step 1103: analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network;
step 1104: analyzing the change rule of the intermediate information caused by the change of the input data of the neural network and the influence of the change of the intermediate information on the output result of the neural network according to the intermediate information of the neural network;
step 1105: and optimizing the parameters of the neural network according to the change rule of the intermediate information and the influence of the change of the intermediate information on the output result of the neural network.
In this embodiment, the specific implementation method of the above steps is the same as that described in embodiments 1 and 2, and is not repeated here.
According to this embodiment, the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer; moreover, because the parameters of the neural network are optimized according to the intermediate information, the application effect of the neural network can be effectively improved.
Example 7
An embodiment of the present invention further provides a method for visualizing features of a neural network having a long-term and short-term memory layer, which corresponds to the apparatus for visualizing features of the neural network of embodiment 3. Fig. 12 is a schematic diagram of a feature visualization method of a neural network according to embodiment 7 of the present invention. As shown in fig. 12, the method includes:
step 1201: obtaining the hidden state of the long and short term memory layer output in the neural network;
step 1202: simplifying the hidden state by using a dimension reduction algorithm;
step 1203: analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network;
step 1204: according to the intermediate information of the neural network, the physical meaning of the characteristic contained in the hidden state output by the long-term and short-term memory layer in the neural network and the change situation of the characteristic with time are represented in a visualized form.
In this embodiment, the specific implementation method of the above steps is the same as that described in embodiments 1 and 3, and is not repeated here.
According to this embodiment, the hidden state output by the long short-term memory layer in the neural network is simplified by a dimensionality reduction algorithm, and the intermediate information of the neural network is analyzed from the simplified hidden state, the input data of the neural network and the output result of the neural network. Since the working principle of the neural network can be analyzed in essence from this intermediate information, optimization and improvement can be carried out for various applications of neural networks having a long short-term memory layer; furthermore, the features contained in the hidden state can be visualized according to the intermediate information, which facilitates research on, and study of, neural networks having a long short-term memory layer.
An embodiment of the present invention also provides a computer-readable program, wherein when the program is executed in an intermediate information analysis apparatus or an electronic device of a neural network, the program causes a computer to execute the intermediate information analysis method of the neural network described in embodiment 5 in the intermediate information analysis apparatus or the electronic device of the neural network.
An embodiment of the present invention further provides a storage medium storing a computer-readable program, where the computer-readable program enables a computer to execute the method for analyzing intermediate information of a neural network according to embodiment 5 in an intermediate information analyzing apparatus or an electronic device of the neural network.
The method for analyzing the intermediate information of the neural network performed in the intermediate information analyzing apparatus or the electronic device of the neural network described in connection with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of the two. For example, one or more of the functional block diagrams and/or one or more combinations of the functional block diagrams illustrated in fig. 1 may correspond to individual software modules of a computer program flow, or to individual hardware modules. These software modules may correspond respectively to the steps shown in fig. 10. These hardware modules may be implemented, for example, by implementing the corresponding software modules in a field programmable gate array (FPGA).
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium; or the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of the mobile terminal or in a memory card that is insertable into the mobile terminal. For example, if the apparatus (e.g., mobile terminal) employs a relatively large capacity MEGA-SIM card or a large capacity flash memory device, the software module may be stored in the MEGA-SIM card or the large capacity flash memory device.
One or more of the functional block diagrams and/or one or more combinations of the functional block diagrams described with respect to fig. 1 may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof designed to perform the functions described herein. One or more of the functional block diagrams and/or one or more combinations of the functional block diagrams described with respect to fig. 1 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
While the invention has been described with reference to specific embodiments, it will be apparent to those skilled in the art that these descriptions are illustrative and not intended to limit the scope of the invention. Various modifications and alterations of this invention will become apparent to those skilled in the art based upon the spirit and principles of this invention, and such modifications and alterations are also within the scope of this invention.
With respect to the embodiments including the above embodiments, the following remarks are also disclosed:
supplementary note 1, an intermediate information analyzing apparatus of a neural network having a long-and-short-term memory layer, comprising:
an obtaining unit, configured to obtain a hidden state output by the long-term and short-term memory layer in the neural network;
a simplification unit for simplifying the hidden state using a dimension reduction algorithm;
a first analysis unit, configured to analyze the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network.
Supplementary note 2, the apparatus according to supplementary note 1, wherein,
the dimensionality reduction algorithm comprises the following steps: a principal component analysis algorithm, an independent component analysis algorithm, or a partial least squares method.
Supplementary note 3, the apparatus according to supplementary note 2, wherein the dimension reduction algorithm is a principal component analysis algorithm,
the simplification unit includes:
a building unit for subtracting from each element in the matrix of the hidden state the average value of the column containing that element, and building a matrix from the calculation results;
a first calculation unit for calculating a covariance matrix of the constructed matrix;
a second calculation unit for calculating an eigenvalue of the covariance matrix;
a third calculation unit for calculating the eigenvectors of the preset number of largest eigenvalues;
and a fourth calculation unit for calculating a projection matrix from the eigenvectors and taking the projection matrix as the simplified hidden state.
Supplementary note 4, the apparatus according to supplementary note 3, wherein,
the data contribution rate of each column element of the projection matrix decreases column by column.
Supplementary note 5, an optimizing apparatus of a neural network having a long-short term memory layer, the apparatus comprising:
an intermediate information analyzing device of the neural network according to supplementary note 1;
the second analysis unit is used for analyzing the change rule of the intermediate information caused by the change of the input data of the neural network and the influence of the change of the intermediate information on the output result of the neural network according to the intermediate information of the neural network;
and the optimization unit is used for optimizing the parameters of the neural network according to the change rule of the intermediate information and the influence of the change of the intermediate information on the output result of the neural network.
Supplementary note 6, a feature visualization apparatus of a neural network having a long-short term memory layer, the apparatus comprising:
an intermediate information analyzing device of the neural network according to supplementary note 1;
a visualization unit, configured to visually represent the physical meaning of the feature included in the hidden state output by the long-short term memory layer in the neural network and the change of the feature over time according to the intermediate information of the neural network.
Supplementary note 7, an electronic device comprising the intermediate information analyzing apparatus of the neural network according to supplementary note 1.
Supplementary note 8, a method of intermediate information analysis of a neural network having a long-short term memory layer, the method comprising:
obtaining a hidden state output by the long-term and short-term memory layer in the neural network;
simplifying the hidden state by using a dimension reduction algorithm;
and analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network.
Supplementary note 9, the method according to supplementary note 8, wherein,
the dimensionality reduction algorithm comprises the following steps: a principal component analysis algorithm, an independent component analysis algorithm, or a partial least squares method.
Supplementary note 10, the method according to supplementary note 9, wherein the dimension reduction algorithm is a principal component analysis algorithm,
the simplifying the hidden state by using the dimensionality reduction algorithm comprises the following steps:
subtracting from each element in the matrix of the hidden state the average value of the column containing that element, and constructing a matrix from the calculation results;
calculating a covariance matrix of the constructed matrix;
calculating an eigenvalue of the covariance matrix;
calculating the eigenvectors of the preset number of largest eigenvalues;
and calculating a projection matrix from the eigenvectors, taking the projection matrix as the simplified hidden state.
Supplementary note 11, the method according to supplementary note 10, wherein,
the data contribution rate of each column element of the projection matrix decreases column by column.
Supplementary note 12, a method of optimizing a neural network having a long-short term memory layer, the method comprising:
obtaining a hidden state output by the long-term and short-term memory layer in the neural network;
simplifying the hidden state by using a dimension reduction algorithm;
analyzing intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network;
analyzing a change rule of the intermediate information caused by the change of the input data of the neural network and the influence of the change of the intermediate information on an output result of the neural network according to the intermediate information of the neural network;
and optimizing parameters of the neural network according to the change rule of the intermediate information and the influence of the change of the intermediate information on the output result of the neural network.
Supplementary note 13, a method of feature visualization for a neural network having a long-short term memory layer, the method comprising:
obtaining a hidden state output by the long-term and short-term memory layer in the neural network;
simplifying the hidden state by using a dimension reduction algorithm;
analyzing intermediate information of the neural network according to the simplified hidden state, the input data of the neural network and the output result of the neural network;
according to the intermediate information of the neural network, the physical meaning of the features contained in the hidden state output by the long-short term memory layer in the neural network and the change situation of the features with time are represented in a visualized form.

Claims (6)

1. An intermediate information analysis apparatus for a neural network for detecting traffic events, the neural network having a long-short term memory layer, the apparatus comprising:
an obtaining unit, configured to obtain a hidden state output by the long-term and short-term memory layer in the neural network;
a simplification unit for simplifying the hidden state using a dimension reduction algorithm;
a first analyzing unit for analyzing the intermediate information of the neural network according to the simplified hidden state, the input data of the neural network, and the output result of the neural network,
the input data includes a plurality of frames of traffic monitoring video arranged in a time sequence,
the output result comprises a classification result of the traffic incident;
the hidden state output by the long-short term memory layer in the neural network includes a feature having a physical meaning.
2. The apparatus of claim 1, wherein,
the dimensionality reduction algorithm comprises the following steps: a principal component analysis algorithm, an independent component analysis algorithm, or a partial least squares method.
3. The apparatus of claim 2, wherein the dimension reduction algorithm is a principal component analysis algorithm,
the simplification unit includes:
a building unit for subtracting from each element in the matrix of the hidden state the average value of the column containing that element, and building a matrix from the calculation results;
a first calculation unit for calculating a covariance matrix of the constructed matrix;
a second calculation unit for calculating an eigenvalue of the covariance matrix;
a third calculation unit for calculating the eigenvectors of the preset number of largest eigenvalues;
and a fourth calculation unit for calculating a projection matrix from the eigenvectors and taking the projection matrix as the simplified hidden state.
4. The apparatus of claim 3, wherein,
the data contribution rate of each column element of the projection matrix decreases column by column.
5. An apparatus for optimizing a neural network having a long-short term memory layer, the apparatus comprising:
the intermediate information analyzing apparatus of a neural network according to claim 1;
the second analysis unit is used for analyzing the change rule of the intermediate information caused by the change of the input data of the neural network and the influence of the change of the intermediate information on the output result of the neural network according to the intermediate information of the neural network;
and the optimization unit is used for optimizing the parameters of the neural network according to the change rule of the intermediate information and the influence of the change of the intermediate information on the output result of the neural network.
6. An apparatus for visualizing features of a neural network having a long-short term memory layer, the apparatus comprising:
the intermediate information analyzing apparatus of a neural network according to claim 1;
a visualization unit, configured to visually represent the physical meaning of the feature included in the hidden state output by the long-short term memory layer in the neural network and the change of the feature over time according to the intermediate information of the neural network.
CN201710794559.XA 2017-09-06 2017-09-06 Intermediate information analysis device, optimization device, and feature visualization device for neural network Active CN109460812B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710794559.XA CN109460812B (en) 2017-09-06 2017-09-06 Intermediate information analysis device, optimization device, and feature visualization device for neural network
JP2018117650A JP7047620B2 (en) 2017-09-06 2018-06-21 Neural network intermediate information analyzer, optimizer and feature visualization device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710794559.XA CN109460812B (en) 2017-09-06 2017-09-06 Intermediate information analysis device, optimization device, and feature visualization device for neural network

Publications (2)

Publication Number Publication Date
CN109460812A CN109460812A (en) 2019-03-12
CN109460812B true CN109460812B (en) 2021-09-14

Family

ID=65605901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710794559.XA Active CN109460812B (en) 2017-09-06 2017-09-06 Intermediate information analysis device, optimization device, and feature visualization device for neural network

Country Status (2)

Country Link
JP (1) JP7047620B2 (en)
CN (1) CN109460812B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011555B (en) * 2021-02-09 2023-01-31 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN113963185A (en) * 2021-10-25 2022-01-21 上海交通大学 Visualization and quantitative analysis method and system for layer feature expression capability in neural network

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05342191A (en) * 1992-06-08 1993-12-24 Mitsubishi Electric Corp System for predicting and analyzing economic time sequential data
JPH08235145A (en) * 1995-02-24 1996-09-13 Toyota Motor Corp Method for determining structure of optimum neural network for application problem
US9620108B2 (en) * 2013-12-10 2017-04-11 Google Inc. Processing acoustic sequences using long short-term memory (LSTM) neural networks that include recurrent projection layers
CN104700828B (en) * 2015-03-19 2018-01-12 清华大学 The construction method of depth shot and long term memory Recognition with Recurrent Neural Network acoustic model based on selective attention principle
US9514391B2 (en) 2015-04-20 2016-12-06 Xerox Corporation Fisher vectors meet neural networks: a hybrid visual classification architecture
CN105844239B (en) * 2016-03-23 2019-03-29 北京邮电大学 It is a kind of that video detecting method is feared based on CNN and LSTM cruelly
CN106407889B (en) * 2016-08-26 2020-08-04 上海交通大学 Method for recognizing human body interaction in video based on optical flow graph deep learning model
CN106528858A (en) * 2016-11-29 2017-03-22 北京百度网讯科技有限公司 Lyrics generating method and device
CN106934352A (en) * 2017-02-28 2017-07-07 华南理工大学 A kind of video presentation method based on two-way fractal net work and LSTM
CN107092894A (en) * 2017-04-28 2017-08-25 孙恩泽 A kind of motor behavior recognition methods based on LSTM models

Also Published As

Publication number Publication date
JP2019046453A (en) 2019-03-22
JP7047620B2 (en) 2022-04-05
CN109460812A (en) 2019-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant