CN110363086A - Graph data recognition method, apparatus, computer device and storage medium - Google Patents

Graph data recognition method, apparatus, computer device and storage medium

Info

Publication number
CN110363086A
Authority
CN
China
Prior art keywords
matrix
feature map
target
trained
convolutional layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910503195.4A
Other languages
Chinese (zh)
Inventor
张一帆
史磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Institute of Automation of Chinese Academy of Science
Original Assignee
Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences, and Institute of Automation of Chinese Academy of Science
Priority to CN201910503195.4A
Publication of CN110363086A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a graph data recognition method, apparatus, computer device and storage medium. The method includes: obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtaining the bias matrix of the current convolutional layer, the bias matrix being a matrix generated when the trained convolutional neural network was generated; obtaining a reference adjacency matrix and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtaining the convolution kernel of the current convolutional layer; generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifying the recognition result corresponding to the graph data according to the target output feature map. By adding a matrix generated according to the task requirements to the fixed adjacency matrix in each convolutional layer, the recognition accuracy of the trained convolutional neural network is improved.

Description

Graph data recognition method, apparatus, computer device and storage medium
Technical field
The present application relates to the field of computer technology, and in particular to a graph data recognition method, apparatus, computer device and storage medium.
Background technique
In skeleton point data, the human body is represented by the coordinates, in the camera coordinate system, of a set of pre-defined joints. Such data can easily be obtained with a depth camera (such as Kinect) or with various pose-estimation algorithms (such as OpenPose). Fig. 1 shows the key human joints defined by the Kinect depth camera, which describes the human body as the three-dimensional coordinates of 25 key joints. Because an action usually exists in the form of a video, an action of length T frames can be represented as a tensor of shape T x 25 x 3.
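As an illustration of this representation only, the following minimal NumPy sketch builds such a tensor; the array names and the frame count are assumptions for illustration, not part of the original disclosure:

```python
import numpy as np

T = 300                         # number of frames in the action clip (assumed)
N_JOINTS = 25                   # key joints defined by Kinect
COORD_DIMS = 3                  # (x, y, z) camera coordinates

# One action clip: for each frame, the 3-D coordinates of all 25 joints.
skeleton_sequence = np.zeros((T, N_JOINTS, COORD_DIMS), dtype=np.float32)

# e.g. joint 0 of frame 0 sits at (0.1, 1.2, 2.4) metres in camera coordinates
skeleton_sequence[0, 0] = [0.1, 1.2, 2.4]
print(skeleton_sequence.shape)  # (300, 25, 3), i.e. T x 25 x 3
```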
Referring to Fig. 2, Fig. 2 is a space-time graph in one embodiment. Each joint is defined as a node of the graph, the physical connections between joints are defined as edges of the graph, and temporal edges are added between the same node in consecutive frames, yielding a space-time graph that can describe human actions.
The currently common skeleton-based action recognition method is graph convolution. Unlike ordinary convolution, when convolving on a graph the number of neighbours of each node is not fixed, while the number of convolution parameters is fixed. In order to map a fixed number of parameters to a variable number of neighbouring nodes, a mapping function must be defined, and the correspondence between parameters and nodes is realised through this mapping function. For example, with a convolution kernel of size three, as shown in Fig. 3, the three parameters correspond respectively to the points 001 that are farther from the body centre 000 than the convolution point, the points 002 that are closer to the body centre, and the convolution point itself 003. The convolution operation can then be expressed by formula (1):
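The equation image is not reproduced in this text; a reconstruction consistent with the variable definitions given below (the standard spatial graph-convolution form, offered here as an assumption) is:

\[
f_{\text{out}}(v_i) \;=\; \sum_{v_j \in B(v_i)} \frac{1}{Z_i(v_j)}\, f_{\text{in}}(v_j)\, w\big(l_i(v_j)\big) \qquad (1)
\]

where B(v_i) denotes the neighbour set of node v_i.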
where f denotes the input and output feature tensors, w the convolution parameters, v a node in the graph, l the mapping function between nodes and parameters, and Z the normalising function. In a concrete implementation, the mapping function can be realised through the adjacency matrix of the graph, and the convolution operation expressed with the adjacency matrix is shown in formula (2):
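Again the equation image is missing; a reconstruction consistent with the description (the adjacency-matrix form used in ST-GCN-style networks, offered as an assumption) is:

\[
f_{\text{out}} \;=\; \sum_{k=1}^{K} W_k \, f_{\text{in}} \, \Lambda_k^{-\frac{1}{2}} A_k \Lambda_k^{-\frac{1}{2}} \qquad (2)
\]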
where A denotes the adjacency matrix of the graph, K the convolution kernel size, and Λ a matrix used to normalise A. Multiplying the feature tensor by the adjacency matrix A "screens out" the required nodes from the feature tensor, which are then multiplied by the corresponding parameters.
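A minimal NumPy sketch of this "screening" effect, assuming a 5-node toy graph; all names and values are illustrative and not taken from the patent:

```python
import numpy as np

# Toy graph with 5 nodes; A[i, j] = 1 if node j is a neighbour of node i.
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]], dtype=np.float32)
Lambda_inv = np.diag(1.0 / A.sum(axis=1))   # row-wise normalisation of A

f_in = np.random.rand(5, 8).astype(np.float32)   # 5 nodes, 8 feature channels
W = np.random.rand(8, 16).astype(np.float32)     # learnable parameters

# Multiplying by the (normalised) adjacency matrix gathers each node's
# neighbours from the feature tensor; the result is then multiplied by W.
f_out = Lambda_inv @ A @ f_in @ W                # shape (5, 16)
```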
When the convolution operation is expressed through the adjacency matrix as above, the adjacency matrix is defined by the topology of the human-body graph in the graph convolutional network. Human poses are highly varied, and a fixed topology cannot accurately describe every pose, which leads to low recognition accuracy.
Summary of the invention
In order to solve the above technical problem, the present application provides a graph data recognition method, apparatus, computer device and storage medium.
In a first aspect, the present application provides a graph data recognition method, comprising:
obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
obtaining a reference adjacency matrix, and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
obtaining the convolution kernel of the current convolutional layer;
generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
identifying the recognition result corresponding to the graph data according to the target output feature map.
In a second aspect, the present application provides a graph data recognition apparatus, comprising:
an input feature map obtaining module, configured to obtain the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
a bias matrix obtaining module, configured to obtain the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
a target adjacency matrix computing module, configured to obtain a reference adjacency matrix and compute the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
a convolution kernel obtaining module, configured to obtain the convolution kernel of the current convolutional layer;
a feature map generating module, configured to generate a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
an identification module, configured to identify the recognition result corresponding to the graph data according to the target output feature map.
A computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
obtaining a reference adjacency matrix, and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
obtaining the convolution kernel of the current convolutional layer;
generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
identifying the recognition result corresponding to the graph data according to the target output feature map.
A computer-readable storage medium has a computer program stored thereon, wherein the computer program, when executed by a processor, performs the following steps:
obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
obtaining a reference adjacency matrix, and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
obtaining the convolution kernel of the current convolutional layer;
generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
identifying the recognition result corresponding to the graph data according to the target output feature map.
In the above graph data recognition method, apparatus, computer device and storage medium, the method comprises: obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated; obtaining a reference adjacency matrix and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtaining the convolution kernel of the current convolutional layer; generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifying the recognition result corresponding to the graph data according to the target output feature map. A bias matrix is added to the adjacency matrix of each convolutional layer of the trained convolutional neural network. The bias matrix is a matrix obtained when the trained convolutional neural network was generated; adding it allows human poses to be expressed more accurately, improves the accuracy of the generated feature maps, and thereby improves the recognition accuracy of the trained convolutional neural network.
Detailed description of the invention
The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the invention, and together with the specification serve to explain the principles of the invention.
In order to describe the technical solutions in the embodiments of the invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic diagram of the key human joints defined by the Kinect depth camera in one embodiment;
Fig. 2 is a space-time graph describing human actions in one embodiment;
Fig. 3 is a schematic diagram of the nodes defined in graph convolution in one embodiment;
Fig. 4 is an application environment diagram of the graph data recognition method in one embodiment;
Fig. 5 is a flow diagram of the graph data recognition method in one embodiment;
Fig. 6 is a data processing flow diagram of a convolutional layer in one embodiment;
Fig. 7 is a structural block diagram of the graph data recognition apparatus in one embodiment;
Fig. 8 is an internal structure diagram of a computer device in one embodiment.
Specific embodiment
To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
Fig. 4 is an application environment diagram of the graph data recognition method in one embodiment. Referring to Fig. 4, the graph data recognition method is applied to an action recognition system. The action recognition system includes a terminal 110 and a server 120, which are connected through a network. The terminal or the server obtains the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtains the bias matrix of the current convolutional layer, the bias matrix being a matrix generated when the trained convolutional neural network was generated; obtains a reference adjacency matrix and computes the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtains the convolution kernel of the current convolutional layer; generates a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifies the recognition result corresponding to the graph data according to the target output feature map. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a laptop computer, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers.
As shown in Fig. 5, in one embodiment, a graph data recognition method is provided. This embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in Fig. 4 above. Referring to Fig. 5, the graph data recognition method specifically includes the following steps:
Step S201: obtain the input feature map of the current convolutional layer of the trained convolutional neural network.
In this embodiment, the input feature map is a feature map obtained by extracting graph data.
Specifically, the trained convolutional neural network is obtained by training on a large amount of labelled graph data, where the graph data are space-time graphs of human actions, as shown in Fig. 2. The labels carried by the graph data describe human actions, such as clapping, jumping, shaking hands and fighting, covering both single-person and multi-person actions. The trained convolutional neural network includes multiple convolutional layers, and the current convolutional layer can be any convolutional layer in the network. The output data of the previous convolutional layer is the input data of the current convolutional layer; this output data is obtained and taken as the input data of the current convolutional layer, and the input data is the input feature map.
Step S202: obtain the bias matrix of the current convolutional layer.
In this embodiment, the bias matrix is a matrix generated when the trained convolutional neural network was generated.
Step S203: obtain a reference adjacency matrix, compute the sum of the reference adjacency matrix and the bias matrix, and obtain the target adjacency matrix.
Specifically, the bias matrix is a matrix used to adjust the reference adjacency matrix, and has exactly the same dimensions as the reference adjacency matrix. The bias matrix is obtained according to the training requirements: different training requirements refer to what the convolutional neural network is trained to do, so the bias matrix obtained when recognising clapping, for example, differs from that obtained when recognising fighting. The reference adjacency matrix is a matrix of fixed elements describing the topology of the human-body graph in the graph convolutional network. The bias matrix is generated together with the trained convolutional neural network, and the elements at corresponding positions in the bias matrices of different convolutional layers need not be identical. The elements of the bias matrix are obtained by generating the trained convolutional neural network; that is, the bias matrix is a network parameter of the trained convolutional neural network, and during training the update of the network parameters includes the update of the bias matrix. The sum of the elements at corresponding positions of the reference adjacency matrix and the bias matrix is computed to obtain the target adjacency matrix. Adjusting the reference adjacency matrix with the bias matrix yields a target adjacency matrix that describes human actions more accurately.
In one embodiment, generating the trained convolutional neural network comprises: obtaining a training set containing multiple training graph data items, each training graph data item carrying corresponding label information; recognising each training graph data item with an initial convolutional neural network to obtain a corresponding recognition result; computing the loss value between the recognition result of each training graph data item and its label according to a preset loss function; and obtaining the trained convolutional neural network when the loss value is less than or equal to a preset loss threshold.
Specifically, the training graph data are collected space-time graphs. The label information identifies the training graph data and includes the human action label, the graph number, and so on. Each training graph data item is input into the initial convolutional neural network; the initial convolutional neural network extracts features from the training graph data and recognises them according to the extracted features to obtain the corresponding recognition result. The loss value between the recognition result of each graph data item and its label information is computed according to the preset loss function, and whether the initial convolutional neural network has converged is determined according to the loss value. The preset loss function is a function configured in advance for computing the loss of the network, and a loss function commonly used for networks may be adopted. When the loss value is less than or equal to the preset loss threshold, the initial convolutional neural network has converged, and the trained convolutional neural network is obtained.
In one embodiment, when the loss value is greater than the preset loss threshold, the loss value is propagated back by the gradient back-propagation algorithm, the network parameters of the initial convolutional neural network are updated, and the training graph data are recognised again with the initial convolutional neural network with updated parameters to obtain corresponding recognition results, until the loss value between the recognition results and the label information is less than the preset loss threshold and the trained convolutional neural network is obtained.
Specifically, the gradient back-propagation algorithm updates the network parameters layer by layer using gradient descent. When the loss value is greater than the preset loss threshold, the initial convolutional neural network has not converged and the network parameters need to be updated. The amount by which the network parameters are updated is determined from the difference between the output of the last layer and the true output; that is, the loss value is propagated back by the gradient back-propagation algorithm, and the network parameters of the initial convolutional neural network are updated layer by layer according to the propagated values. The training graph data are then recognised again with the initial convolutional neural network with updated parameters, and whether the network has converged is determined again from the loss value between the recognition results and the label information; if it has not converged, the network parameters continue to be updated until the network converges and the trained convolutional neural network is obtained.
In one embodiment, the initial convolutional neural network model includes at least one convolutional layer, and each convolutional layer contains an initial bias matrix. Propagating the loss back by the gradient back-propagation algorithm and updating the network parameters of the initial convolutional neural network comprises: when the loss value is propagated back to any convolutional layer by the gradient back-propagation algorithm, obtaining the back-propagated value of that convolutional layer, and updating the parameters of the bias matrix according to the back-propagated value of that convolutional layer.
Specifically, the preset loss function is a function configured in advance for computing the loss of the network, and may be a common loss function such as the cross-entropy loss function or the quadratic loss function. The gradient back-propagation algorithm passes the difference between the erroneous recognition result and the true label down layer by layer from the final output layer of the convolutional neural network; each layer updates its network parameters according to this difference, i.e. the back-propagated value. When the loss is propagated back to any convolutional layer, the network parameters of that layer are updated according to its back-propagated value, and these network parameters include the parameters of the bias matrix.
Step S204: obtain the convolution kernel of the current convolutional layer.
Specifically, each convolutional layer includes multiple convolution kernels; the number of convolution kernels may be the same or different across convolutional layers, and the kernels themselves may be identical or different. A convolution kernel is used to perform the convolution operation on the image, and different convolution kernels extract different image features.
Step S205: generate the target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map.
Specifically, feature extraction is performed on the input feature map using the target adjacency matrix to obtain a feature map; a convolution operation is then applied to this feature map using the convolution kernel to obtain a convolved feature map, which is taken as the target output feature map. Using the target adjacency matrix for feature extraction on the input feature map yields more accurate image features.
In one embodiment, step S205 comprises:
Step S2051: reshape the input feature map to obtain a reshaped feature map.
In this embodiment, the first dimension of the reshaped feature map is the product of the first and second dimensions of the input feature map.
Specifically, reshaping means adjusting the input feature map so that the product of its first and second dimensions becomes the first dimension of the reshaped feature map; for example, an input feature map with three dimensions is adjusted into a two-dimensional reshaped feature map. Suppose the input feature map is C x M x N, where the first dimension is C, the second dimension is M and the third dimension is N; then the reshaped map is CM x N, whose first dimension is the product CM of C and M and whose second dimension equals the third dimension of the input feature map, so the elements of the whole input feature map are unchanged and the total number of elements is the product CMN. Here the first dimension C is the number of channels, the second dimension M is the number of rows of the input feature map, and the third dimension N is the number of columns of the input feature map, where N represents the number of human joints; in Kinect, N is defined as 25. Reshaping the matrix is done for convenience of computation.
Step S2052: compute the product of the reshaped feature map and the matrix of each channel of the target adjacency matrix to obtain the second product matrix of each channel.
Specifically, the second dimension of the reshaped feature map equals the first dimension of each channel matrix of the target adjacency matrix. For example, if the reshaped feature map is CM x N and the target adjacency matrix is C x N x N, each channel matrix being N x N, then the product matrix of the reshaped feature map and each channel matrix is CM x N.
Step S2053: reverse-reshape the second product matrix of each channel to obtain the reverse-reshaped feature map of each channel.
Specifically, reverse reshaping is the inverse of reshaping: if reshaping converts a three-dimensional matrix into a two-dimensional matrix, then reverse reshaping converts a two-dimensional matrix back into a three-dimensional matrix. For example, if the product matrix of each channel is CM x N, the reverse-reshaped feature map obtained after reverse reshaping is a C x M x N matrix.
Step S2054: according to the convolution kernel of each channel, perform a convolution operation on the reverse-reshaped feature map to obtain the target feature map of each channel of the current convolutional layer.
Step S2055: sum the target feature maps of all channels to obtain the output feature map of the current convolutional layer, and take the output feature map of the current convolutional layer as the target output feature map.
Specifically, feature extraction is performed on the reverse-reshaped feature map with the convolution kernel corresponding to each channel to obtain the feature corresponding to each kernel, and the target feature map of each channel is composed of the features extracted by the kernels. The sum of the target feature maps of all channels is computed, i.e. the elements at corresponding positions are added, to obtain the output feature map, which is the output feature map of the current convolutional layer.
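A NumPy sketch of steps S2051 to S2055 under the shapes used above (C channels, M rows, N = 25 joints). The choice of a 1 x 1 kernel mixing channels and the specific sizes are illustrative assumptions:

```python
import numpy as np

C, M, N, K = 64, 300, 25, 3                              # channels, rows (frames), joints, kernels
f_in = np.random.rand(C, M, N).astype(np.float32)        # input feature map, C x M x N
A_target = np.random.rand(K, N, N).astype(np.float32)    # target adjacency, one N x N matrix per channel
W = np.random.rand(K, C, C).astype(np.float32)           # one convolution kernel per channel (assumed 1 x 1)

# S2051: reshape C x M x N -> CM x N
reshaped = f_in.reshape(C * M, N)

out = np.zeros((C, M, N), dtype=np.float32)
for k in range(K):
    # S2052: product with the k-th channel matrix of the target adjacency, CM x N
    product = reshaped @ A_target[k]
    # S2053: reverse-reshape back to C x M x N
    unreshaped = product.reshape(C, M, N)
    # S2054: convolution with the k-th kernel (here a 1 x 1 kernel mixing the C channels)
    target_k = np.einsum('oc,cmn->omn', W[k], unreshaped)
    # S2055: sum the target feature maps of all channels
    out += target_k

print(out.shape)   # (C, M, N): output feature map of the current convolutional layer
```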
In one embodiment, step S205 further comprises:
Step S2056: determine whether the number of channels of the output feature map is consistent with the number of channels of the input feature map.
Step S2057: when they are consistent, take the sum of the input feature map and the output feature map as the target output feature map of the current convolutional layer.
Step S2058: when they are inconsistent, perform a convolution operation on the input feature map to obtain a convolved feature map whose number of channels is consistent with that of the output feature map, and take the sum of the convolved feature map and the output feature map as the target output feature map.
Specifically, for the output feature map generated from the convolution kernel of each channel and the corresponding reverse-reshaped matrix, it is determined whether its number of channels is consistent with that of the input feature map. When they are consistent, the elements of the input feature map at positions corresponding to the output feature map are added to it to obtain the target output feature map. When they are inconsistent, a convolution operation is applied to the input feature map to obtain a convolved feature map with the same number of channels as the output feature map, and the sum of the elements of the convolved feature map and of the output feature map at the same positions is computed to obtain the target output feature map. Determining the target output feature map according to whether the numbers of channels of the input and output feature maps match improves the accuracy of the target output feature map.
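A PyTorch sketch of steps S2056 to S2058, assuming a 1 x 1 convolution is used to match channel counts (consistent with the residual branch described in the specific embodiment below); the class name is illustrative:

```python
import torch
import torch.nn as nn

class ResidualMatch(nn.Module):
    """Residual branch for steps S2056 to S2058.

    If the channel counts of the input and output feature maps already match,
    they are summed directly; otherwise the input feature map is first passed
    through a 1 x 1 convolution so that the channel counts agree.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        if in_channels == out_channels:
            self.match = nn.Identity()
        else:
            self.match = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, f_in: torch.Tensor, f_out: torch.Tensor) -> torch.Tensor:
        # f_in, f_out: tensors of shape (batch, channels, T, N)
        return self.match(f_in) + f_out
```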
Step S206: identify the recognition result corresponding to the graph data according to the target output feature map.
Specifically, the target output feature map is fed into the identification layer of the trained convolutional neural network; the identification layer produces the candidate recognition results corresponding to the target output feature map, the candidate recognition result with the largest recognition probability is selected as the target recognition result, and the target recognition result is taken as the recognition result corresponding to the graph data. For example, when the recognition types include the three types clapping, jumping and holding hands, with a recognition probability of 0.89 for clapping, 0.01 for jumping and 0.1 for holding hands, the recognition result corresponding to the graph data is clapping.
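For illustration only, a small sketch of selecting the candidate with the largest recognition probability, using the three classes and probabilities from the example above:

```python
candidate_probabilities = {"clap": 0.89, "jump": 0.01, "hold hands": 0.10}

# The identification layer's candidate with the largest probability becomes
# the target recognition result for the graph data.
recognition_result = max(candidate_probabilities, key=candidate_probabilities.get)
print(recognition_result)   # "clap"
```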
In one embodiment, when the trained convolutional neural network contains the current convolutional layer and a next convolutional layer after the current convolutional layer, the target output feature map is taken as the input feature map of the next convolutional layer, the next convolutional layer becomes the current convolutional layer, and the method returns to the step of obtaining the input feature map of the current convolutional layer of the trained convolutional neural network, until every convolutional layer in the trained convolutional neural network has been processed; the target output feature map of the last convolutional layer is then output and fed into the identification layer to obtain the recognition result corresponding to the graph data. Convolutional layers with the same network structure have the same data processing flow.
The above graph data recognition method comprises: obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated; obtaining a reference adjacency matrix and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtaining the convolution kernel of the current convolutional layer; generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifying the recognition result corresponding to the graph data according to the target output feature map. A bias matrix is added to the adjacency matrix of each convolutional layer in the trained convolutional neural network. The bias matrix is a matrix obtained when the trained convolutional neural network was generated; it allows human poses to be expressed better, improves the accuracy of the generated feature maps, and thereby improves the recognition accuracy of the trained convolutional neural network.
In a specific embodiment, the method of generating the feature map is as follows. Referring to Fig. 6, Fig. 6 is a data processing flow diagram of a convolutional layer in one embodiment, where f_in is the input feature map of the current convolutional layer and f_out is the output feature map of the current convolutional layer. The specific expression of the output feature map in terms of the input feature map is shown in formula (3):
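The equation image is not reproduced here; a reconstruction from the variable definitions that follow, offered as an assumption and with the normalisation of A_k from formula (2) taken as folded into A_k, is:

\[
f_{\text{out}} \;=\; \sum_{k=1}^{K_v} W_k \, f_{\text{in}} \left( A_k + B_k \right) \qquad (3)
\]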
where A_k is the k-th adjacency matrix of the reference adjacency matrix, B_k the k-th adjacency matrix of the bias matrix, W_k the k-th convolution-kernel parameter, and K_v the size of the convolution kernel; the kernel size can be customised, e.g. K_v = 3 or 5. Suppose the dimension information of the input feature map is C_in x T x N, where C represents the number of channels, T the number of frames of the graph data, and N the number of joint nodes defined by Kinect, with N = 25. The input feature map is reshaped to obtain a reshaped feature map of size C_in T x N. The bias matrix B_k is a matrix obtained after training the convolutional neural network, and B_k has the same size information as A_k, i.e. N x N. The sum of the bias matrix B_k and the reference adjacency matrix A_k is computed to obtain the matrix of each channel of the target adjacency matrix; the product of the matrix of each channel of the target adjacency matrix and the reshaped feature map is computed to obtain the second product matrix of each channel, which is reverse-reshaped to obtain the reverse-reshaped matrix. A convolution kernel W_k is obtained, and the reverse-reshaped matrix is convolved with the kernel of each channel to obtain the corresponding output feature map of each channel. It is then determined whether the number of channels of the output feature map is consistent with that of the input feature map. If they are inconsistent, the input feature map passes through a residual branch res, in which the convolution kernel size is 1 x 1, so that it is adjusted into a matrix whose number of channels matches that of the output feature map, and the sum of the adjusted input feature map and the output feature map is computed to obtain the target output feature map. If they are consistent, the sum of the input feature map and the output feature map is computed to obtain the target output feature map. The action of each graph data item is identified according to the target output feature map to obtain the corresponding recognition result.
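A compact PyTorch sketch of the whole convolutional layer described above, under the stated assumptions (A_k fixed and pre-normalised, B_k learnable, W_k realised as 1 x 1 convolutions, residual branch with a 1 x 1 kernel when channel counts differ); the class and argument names are illustrative:

```python
import torch
import torch.nn as nn

class BiasedGraphConvLayer(nn.Module):
    """One convolutional layer: f_out = sum_k W_k f_in (A_k + B_k), plus a residual."""

    def __init__(self, in_channels, out_channels, reference_adjacency):
        super().__init__()
        # reference_adjacency: (K_v, N, N), pre-normalised human-body topology A_k.
        self.register_buffer("A", reference_adjacency)
        self.B = nn.Parameter(torch.zeros_like(reference_adjacency))   # bias matrices B_k
        k_v = reference_adjacency.size(0)
        # W_k realised as one 1 x 1 convolution per kernel partition.
        self.W = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, kernel_size=1) for _ in range(k_v)]
        )
        self.res = (nn.Identity() if in_channels == out_channels
                    else nn.Conv2d(in_channels, out_channels, kernel_size=1))

    def forward(self, f_in):
        # f_in: (batch, C_in, T, N)
        out = 0
        for k, w_k in enumerate(self.W):
            target_adj = self.A[k] + self.B[k]                  # target adjacency matrix
            gathered = torch.einsum("bctn,nm->bctm", f_in, target_adj)
            out = out + w_k(gathered)                           # convolve and accumulate
        return out + self.res(f_in)                             # residual connection
```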
When the above feature-map generation process is the data processing process of training the convolutional neural network, and the recognition result of a graph data item is inconsistent with the class in its label, the loss value corresponding to each graph data item is computed according to the preset loss function, the loss value is propagated back by the gradient back-propagation algorithm, the back-propagated value of each convolutional layer is obtained, and the parameters of the corresponding convolutional layer, namely the convolution kernel W of each channel and the parameters of the bias matrix B, are updated according to the back-propagated value.
When the above feature-map generation process is the data processing process of the trained convolutional neural network after training, the result identified from the target output feature map is taken as the recognition result corresponding to the graph data. The graph data are input into the trained convolutional neural network, which includes multiple convolutional layers and an identification layer; each convolutional layer includes a convolution kernel and a target adjacency matrix. Features are extracted from the graph data through the target adjacency matrix of each convolutional layer to obtain the corresponding set of image feature maps; a convolution operation is applied to the set of image feature maps through the convolution kernels to obtain the target output feature map of each convolutional layer. The target output feature map of the convolutional layer preceding the identification layer is used as the input data of the identification layer, and the corresponding human action type is identified according to the target output feature map of each graph data item.
In the above feature-map generation process, the bias matrix is a graph adapted to the action recognition task and estimated from the database statistics; as a parameter of the network, the bias matrix is continuously updated according to the classification loss during training, and it is different for every convolutional layer. Because the bias matrix is a matrix estimated according to the recognition task, the target adjacency matrix obtained from the bias matrix can extract features from the input data better and yields an accurate target output feature map; therefore, when recognition is performed according to the target output feature map, the recognition result is more accurate.
Fig. 5 is a flow diagram of the graph data recognition method in one embodiment. It should be understood that, although the steps in the flowchart of Fig. 5 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least part of the steps in Fig. 5 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 7, a graph data recognition apparatus 200 is provided, comprising:
an input feature map obtaining module 201, configured to obtain the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
a bias matrix obtaining module 202, configured to obtain the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
a target adjacency matrix computing module 203, configured to obtain a reference adjacency matrix and compute the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
a convolution kernel obtaining module 204, configured to obtain the convolution kernel of the current convolutional layer;
a feature map generating module 205, configured to generate a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
an identification module 206, configured to identify the recognition result corresponding to the graph data according to the target output feature map.
In one embodiment, the target adjacency matrix computing module is further configured to reduce the dimensionality of the input feature map of the current convolutional layer to obtain a dimensionality-reduced matrix, normalise the dimensionality-reduced matrix to obtain a normalised matrix, add each element of the normalised matrix to the corresponding element of the target adjacency matrix to obtain an updated target adjacency matrix, and take the updated target adjacency matrix as the target adjacency matrix.
In one embodiment, the above target adjacency matrix computing module comprises:
a dimensionality reduction unit, configured to reduce the dimensionality of the matrix of each channel of the input feature map according to a first dimensionality-reduction function to obtain the first feature map corresponding to each channel, the input feature map including at least three dimensions with the first dimension being the number of channels, and to reduce the dimensionality of the matrix of each channel of the feature map according to a second dimensionality-reduction function to obtain the second feature map corresponding to each channel;
a first matrix product computing unit, configured to compute the first product matrix of the first feature map and the second feature map of each channel, and to take the first product matrix of each channel as the matrix of the corresponding channel of the dimensionality-reduced matrix; and
a normalisation unit, configured to normalise the matrix of each channel of the dimensionality-reduced matrix to obtain the matrix of the corresponding channel of the normalised matrix.
In one embodiment, the above graph data recognition apparatus further comprises:
a network generating module, configured to generate the trained convolutional neural network, wherein the network generating module comprises:
a training data obtaining unit, configured to obtain a training set containing multiple training graph data items, each training graph data item carrying corresponding label information;
a recognition unit, configured to recognise each training graph data item with an initial convolutional neural network to obtain a corresponding recognition result;
a loss computing unit, configured to compute the loss value between the recognition result of each training graph data item and its label according to a preset loss function; and
a model generating unit, configured to obtain the trained convolutional neural network when the loss value is less than or equal to a preset loss threshold.
In one embodiment, the above network generating module further comprises:
a parameter updating unit, configured to propagate the loss value back by the gradient back-propagation algorithm and update the network parameters of the initial convolutional neural network when the loss value is greater than the preset loss threshold; and
a model determining unit, further configured to recognise the training graph data with the initial convolutional neural network with updated network parameters to obtain corresponding recognition results, until the loss value between the recognition results and the label information is less than the preset loss threshold and the trained convolutional neural network is obtained.
In one embodiment, the feature map generating module comprises:
a feature map reshaping unit, configured to reshape the input feature map to obtain a reshaped feature map, the first dimension of the reshaped feature map being the product of the first and second dimensions of the input feature map, wherein the input feature map includes at least three dimensions, the first dimension is the number of channels, and the target adjacency matrix includes at least three dimensions;
a second matrix product computing unit, configured to compute the product of the reshaped feature map and the matrix of each channel of the target adjacency matrix to obtain the second product matrix of each channel;
a reverse reshaping unit, configured to reverse-reshape the second product matrix of each channel to obtain the reverse-reshaped feature map of each channel;
a convolution unit, configured to perform, according to the convolution kernel of each channel, a convolution operation on the reverse-reshaped feature map to obtain the target feature map of each channel of the current convolutional layer; and
a feature map generating unit, configured to sum the target feature maps of all channels to obtain the output feature map of the current convolutional layer, and to take the output feature map of the current convolutional layer as the target output feature map.
In one embodiment, the above graph data recognition apparatus further comprises:
a channel judging module, configured to determine whether the number of channels of the output feature map is consistent with the number of channels of the input feature map; and
the feature map generating module is further configured to, when they are consistent, take the sum of the input feature map and the output feature map as the target output feature map of the current convolutional layer, and, when they are inconsistent, perform a convolution operation on the input feature map to obtain a convolved feature map whose number of channels is consistent with that of the output feature map and take the sum of the convolved feature map and the output feature map as the target output feature map.
Fig. 8 shows the internal structure diagram of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in Fig. 4. As shown in Fig. 8, the computer device includes a processor, a memory, a network interface, an input apparatus and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the graph data recognition method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the graph data recognition method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input apparatus of the computer device may be a touch layer covering the display screen, a key, trackball or trackpad provided on the housing of the computer device, or an external keyboard, trackpad, mouse, or the like.
Those skilled in the art can understand that the structure shown in Fig. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
In one embodiment, the graph data recognition apparatus provided by the present application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in Fig. 8. The memory of the computer device may store the program modules constituting the graph data recognition apparatus, for example the input feature map obtaining module 201, the bias matrix obtaining module 202, the target adjacency matrix computing module 203, the convolution kernel obtaining module 204, the feature map generating module 205 and the identification module 206 shown in Fig. 7. The computer program constituted by these program modules causes the processor to execute the steps of the graph data recognition method of the embodiments of the present application described in this specification.
For example, the computer device shown in Fig. 8 may, through the input feature map obtaining module 201 of the graph data recognition apparatus shown in Fig. 7, obtain the input feature map of the current convolutional layer of the trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data. The computer device may, through the bias matrix obtaining module 202, obtain the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated. The computer device may, through the target adjacency matrix computing module 203, obtain the reference adjacency matrix and compute the sum of the reference adjacency matrix and the bias matrix to obtain the target adjacency matrix. The computer device may, through the convolution kernel obtaining module 204, obtain the convolution kernel of the current convolutional layer. The computer device may, through the feature map generating module 205, generate the target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map. The computer device may, through the identification module 206, identify the recognition result corresponding to the graph data according to the target output feature map.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor, when executing the computer program, performs the following steps: obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated; obtaining a reference adjacency matrix and computing the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtaining the convolution kernel of the current convolutional layer; generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifying the recognition result corresponding to the graph data according to the target output feature map.
In one embodiment, the processor, when executing the computer program, further performs the following steps: reducing the dimensionality of the input feature map of the current convolutional layer to obtain a dimensionality-reduced matrix; normalising the dimensionality-reduced matrix to obtain a normalised matrix; adding each element of the normalised matrix to the corresponding element of the target adjacency matrix to obtain an updated target adjacency matrix; and taking the updated target adjacency matrix as the target adjacency matrix.
In one embodiment, the input feature map includes at least three dimensions, the first dimension being the number of channels, and the above comprises: reducing the dimensionality of the matrix of each channel of the input feature map according to a first dimensionality-reduction function to obtain the first feature map corresponding to each channel; reducing the dimensionality of the matrix of each channel of the feature map according to a second dimensionality-reduction function to obtain the second feature map corresponding to each channel; computing the first product matrix of the first feature map and the second feature map of each channel, and taking the first product matrix of each channel as the matrix of the corresponding channel of the dimensionality-reduced matrix; and normalising the matrix of each channel of the dimensionality-reduced matrix to obtain the matrix of the corresponding channel of the normalised matrix.
In one embodiment, the convolution mind trained also is performed the steps of into when processor executes computer program Through network, comprising: obtain the training set comprising multiple trained diagram datas, each trained diagram data is believed comprising corresponding label Breath, identifies each trained diagram data by initial convolutional neural networks, corresponding recognition result is obtained, according to default damage It loses function and calculates the recognition result of each trained diagram data and the penalty values of label, lost when penalty values are less than or equal to default differential loss When value, the convolutional neural networks trained.
In one embodiment, when executing the computer program, the processor further implements the following steps: when the loss value is greater than the preset loss value, returning the loss value through a gradient back-propagation algorithm and updating the network parameters of the initial convolutional neural network; and recognizing the training graph data again using the initial convolutional neural network with the updated network parameters to obtain a corresponding recognition result, until the loss value between the recognition result and the label information is less than the preset loss value, at which point the trained convolutional neural network is obtained.
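Taken together, the last two paragraphs describe a standard supervised training loop. The sketch below is a hedged rendering of it, assuming cross-entropy as the preset loss function and SGD as the update rule; neither is named in the source, and every hyper-parameter value is a placeholder.

    import torch
    import torch.nn as nn

    def train_until_converged(model, training_set, preset_loss=0.01, lr=0.01, max_epochs=100):
        loss_fn = nn.CrossEntropyLoss()                       # assumed preset loss function
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(max_epochs):
            worst = 0.0
            for graph, label in training_set:                 # training graph data and its label
                optimizer.zero_grad()
                recognition = model(graph)                    # recognition result
                loss = loss_fn(recognition, label)            # loss value against the label
                loss.backward()                               # gradient back-propagation to every layer
                optimizer.step()                              # update network parameters (incl. bias matrices)
                worst = max(worst, loss.item())
            if worst <= preset_loss:                          # stop once loss value <= preset loss value
                break
        return model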
In one embodiment, the input feature map includes at least three dimensions, the first dimension being the number of channels, and the target adjacency matrix includes at least three dimensions. Generating the target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map includes: reshaping the input feature map to obtain a reshaped feature map, the first dimension of the reshaped feature map being the product of the first dimension and the second dimension of the input feature map; calculating the product of the matrix of each channel of the reshaped feature map and the target adjacency matrix to obtain a second product matrix for each channel; inverse-reshaping the second product matrix of each channel to obtain an inverse-reshaped feature map for each channel; performing a convolution operation on the inverse-reshaped feature map of each channel according to the convolution kernel of that channel to obtain a target feature map for each channel of the current convolutional layer; and summing the target feature maps of the channels to obtain the output feature map of the current convolutional layer, which is used as the target output feature map.
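The reshaping route just described can be spelled out step by step. This sketch assumes a (batch, channels, frames, nodes) layout and 1x1 convolution kernels; the convs argument (one convolution per adjacency subset) is an assumption introduced for illustration.

    import torch
    import torch.nn as nn

    def graph_convolve(x, target_adjacency, convs):
        n, c, t, v = x.shape                                  # (batch, channels, frames, nodes)
        reshaped = x.reshape(n, c * t, v)                     # first dim = channels x second dim
        out = 0
        for k, conv in enumerate(convs):
            mixed = reshaped @ target_adjacency[k]            # second product matrix for subset k
            unreshaped = mixed.reshape(n, c, t, v)            # inverse-reshape to the original layout
            out = out + conv(unreshaped)                      # convolve, then sum over subsets
        return out                                            # output feature map of the current layer

    # illustrative usage: convs = nn.ModuleList([nn.Conv2d(64, 128, 1) for _ in range(3)])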
In one embodiment, when executing the computer program, the processor further implements the following steps: determining whether the number of channels of the output feature map is consistent with the number of channels of the input feature map; when they are consistent, taking the sum of the input feature map and the output feature map as the target output feature map of the current convolutional layer; and when they are inconsistent, performing a convolution operation on the input feature map to obtain a convolution feature map whose number of channels is consistent with that of the output feature map, and taking the sum of the convolution feature map and the output feature map as the target output feature map.
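The channel check above is the familiar residual-connection pattern. A brief sketch, assuming a 1x1 convolution (match_conv, a hypothetical helper) is supplied whenever the channel counts differ:

    def add_residual(input_feature_map, output_feature_map, match_conv=None):
        if input_feature_map.shape[1] == output_feature_map.shape[1]:   # channel counts agree
            return output_feature_map + input_feature_map
        # channel counts differ: convolve the input so its channel count matches the output
        conv_feature_map = match_conv(input_feature_map)
        return output_feature_map + conv_feature_map                    # target output feature map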
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps: obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data; obtaining the bias matrix of the current convolutional layer, where the bias matrix is a matrix generated when the trained convolutional neural network was generated; obtaining a reference adjacency matrix and calculating the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix; obtaining the convolution kernel of the current convolutional layer; generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and identifying the recognition result corresponding to the graph data according to the target output feature map.
In one embodiment, when executed by the processor, the computer program further implements the following steps: performing dimensionality reduction on the input feature map of the current convolutional layer to obtain a dimensionality-reduction matrix; normalizing the dimensionality-reduction matrix to obtain a normalization matrix; adding each element of the normalization matrix to the corresponding element of the target adjacency matrix to obtain an updated target adjacency matrix; and using the updated target adjacency matrix as the target adjacency matrix.
In one embodiment, the input feature map includes at least three dimensions, the first dimension being the number of channels, and the steps include: performing dimensionality reduction on the matrix of each channel of the input feature map according to a first dimensionality-reduction function to obtain a first feature map for each channel; performing dimensionality reduction on the matrix of each channel of the feature map according to a second dimensionality-reduction function to obtain a second feature map for each channel; calculating the first product matrix of the first feature map and the second feature map of each channel; using the first product matrix of each channel as the matrix of the corresponding channel of the dimensionality-reduction matrix; and normalizing the matrix of each channel of the dimensionality-reduction matrix to obtain the matrix of the corresponding channel of the normalization matrix.
In one embodiment, when executed by the processor, the computer program further implements the following steps for generating the trained convolutional neural network: obtaining a training set containing multiple training graph data items, each training graph data item containing corresponding label information; recognizing each training graph data item through an initial convolutional neural network to obtain a corresponding recognition result; calculating the loss value between the recognition result of each training graph data item and its label according to a preset loss function; and, when the loss value is less than or equal to a preset loss value, obtaining the trained convolutional neural network.
In one embodiment, when executed by the processor, the computer program further implements the following steps: when the loss value is greater than the preset loss value, returning the loss value through a gradient back-propagation algorithm and updating the network parameters of the initial convolutional neural network; and recognizing the training graph data again using the initial convolutional neural network with the updated network parameters to obtain a corresponding recognition result, until the loss value between the recognition result and the label information is less than the preset loss value, at which point the trained convolutional neural network is obtained.
In one embodiment, the input feature map includes at least three dimensions, the first dimension being the number of channels, and the target adjacency matrix includes at least three dimensions. Generating the target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map includes: reshaping the input feature map to obtain a reshaped feature map, the first dimension of the reshaped feature map being the product of the first dimension and the second dimension of the input feature map; calculating the product of the matrix of each channel of the reshaped feature map and the target adjacency matrix to obtain a second product matrix for each channel; inverse-reshaping the second product matrix of each channel to obtain an inverse-reshaped feature map for each channel; performing a convolution operation on the inverse-reshaped feature map of each channel according to the convolution kernel of that channel to obtain a target feature map for each channel of the current convolutional layer; and summing the target feature maps of the channels to obtain the output feature map of the current convolutional layer, which is used as the target output feature map.
In one embodiment, when executed by the processor, the computer program further implements the following steps: determining whether the number of channels of the output feature map is consistent with the number of channels of the input feature map; when they are consistent, taking the sum of the input feature map and the output feature map as the target output feature map of the current convolutional layer; and when they are inconsistent, performing a convolution operation on the input feature map to obtain a convolution feature map whose number of channels is consistent with that of the output feature map, and taking the sum of the convolution feature map and the output feature map as the target output feature map.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The above are only specific embodiments of the present invention, provided to enable those skilled in the art to understand or implement the invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A graph data recognition method, characterized in that the method comprises:
obtaining the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
obtaining the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
obtaining a reference adjacency matrix, and calculating the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
obtaining the convolution kernel of the current convolutional layer;
generating a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
identifying the recognition result corresponding to the graph data according to the target output feature map.
2. The method according to claim 1, characterized in that generating the trained convolutional neural network comprises:
obtaining a training set containing multiple training graph data items, each training graph data item containing corresponding label information;
recognizing each training graph data item through an initial convolutional neural network to obtain a corresponding recognition result;
calculating the loss value between the recognition result of each training graph data item and the label according to a preset loss function; and
obtaining the trained convolutional neural network when the loss value is less than or equal to a preset loss value.
3. The method according to claim 2, characterized in that the method further comprises:
when the loss value is greater than the preset loss value, returning the loss value through a gradient back-propagation algorithm and updating the network parameters of the initial convolutional neural network; and
recognizing the training graph data again using the initial convolutional neural network with the updated network parameters to obtain a corresponding recognition result, until the loss value between the recognition result and the label information is less than the preset loss value, at which point the trained convolutional neural network is obtained.
4. The method according to claim 3, characterized in that the initial convolutional neural network comprises at least one convolutional layer, the convolutional layer contains an initial bias matrix, and returning the loss value through the gradient back-propagation algorithm and updating the network parameters of the initial convolutional neural network comprises:
obtaining the return value of each convolutional layer when the loss value is passed back to any one of the convolutional layers through the gradient back-propagation algorithm; and
updating the network parameters of the convolutional layer according to the return value of each convolutional layer, the network parameters including the parameters of the bias matrix.
5. The method according to any one of claims 1 to 4, characterized in that the input feature map comprises at least three dimensions, the first dimension being the number of channels, the target adjacency matrix comprises at least three dimensions, and generating the target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map comprises:
reshaping the input feature map to obtain a reshaped feature map, the first dimension of the reshaped feature map being the product of the first dimension and the second dimension of the input feature map;
calculating the product of the matrix of each channel of the reshaped feature map and the target adjacency matrix to obtain a second product matrix for each channel;
inverse-reshaping the second product matrix of each channel to obtain an inverse-reshaped feature map for each channel;
performing a convolution operation on the inverse-reshaped feature map of each channel according to the convolution kernel of that channel to obtain a target feature map for each channel of the current convolutional layer; and
summing the target feature maps of the channels to obtain the output feature map of the current convolutional layer, and using the output feature map of the current convolutional layer as the target output feature map.
6. The method according to claim 5, characterized in that the method further comprises:
determining whether the number of channels of the output feature map is consistent with the number of channels of the input feature map;
when they are consistent, taking the sum of the input feature map and the output feature map as the target output feature map of the current convolutional layer; and
when they are inconsistent, performing a convolution operation on the input feature map to obtain a convolution feature map whose number of channels is consistent with that of the output feature map, and taking the sum of the convolution feature map and the output feature map as the target output feature map.
7. A graph data recognition apparatus, characterized in that the apparatus comprises:
an input feature map acquisition module, configured to obtain the input feature map of the current convolutional layer of a trained convolutional neural network, the input feature map being a feature map obtained by extracting graph data;
a bias matrix acquisition module, configured to obtain the bias matrix of the current convolutional layer, wherein the bias matrix is a matrix generated when the trained convolutional neural network was generated;
a target adjacency matrix calculation module, configured to obtain a reference adjacency matrix and calculate the sum of the reference adjacency matrix and the bias matrix to obtain a target adjacency matrix;
a convolution kernel acquisition module, configured to obtain the convolution kernel of the current convolutional layer;
a feature map generation module, configured to generate a target output feature map according to the convolution kernel of the current convolutional layer, the target adjacency matrix and the input feature map; and
a recognition module, configured to identify the recognition result corresponding to the graph data according to the target output feature map.
8. The apparatus according to claim 7, characterized in that the apparatus further comprises:
a network generation module, configured to generate the trained convolutional neural network, wherein the network generation module comprises:
a data acquisition unit, configured to obtain a training set containing multiple training graph data items, each training graph data item containing corresponding label information;
a recognition unit, configured to recognize each training graph data item through an initial convolutional neural network to obtain a corresponding recognition result;
a loss value calculation unit, configured to calculate the loss value between the recognition result of each training graph data item and the label according to a preset loss function; and
a network determination unit, configured to obtain the trained convolutional neural network when the loss value is less than or equal to a preset loss value.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the method according to any one of claims 1 to 6 when executed by a processor.
CN201910503195.4A 2019-06-11 2019-06-11 Diagram data recognition methods, device, computer equipment and storage medium Pending CN110363086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910503195.4A CN110363086A (en) 2019-06-11 2019-06-11 Diagram data recognition methods, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910503195.4A CN110363086A (en) 2019-06-11 2019-06-11 Diagram data recognition methods, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110363086A true CN110363086A (en) 2019-10-22

Family

ID=68217259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910503195.4A Pending CN110363086A (en) 2019-06-11 2019-06-11 Diagram data recognition methods, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110363086A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121760A1 (en) * 2016-10-27 2018-05-03 General Electric Company Methods of systems of generating virtual multi-dimensional models using image analysis
CN107169415A (en) * 2017-04-13 2017-09-15 西安电子科技大学 Human motion recognition method based on convolutional neural networks feature coding
CN108304795A (en) * 2018-01-29 2018-07-20 清华大学 Human skeleton Activity recognition method and device based on deeply study
CN109086652A (en) * 2018-06-04 2018-12-25 平安科技(深圳)有限公司 Handwritten word model training method, Chinese characters recognition method, device, equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEI SHI 等: "Non-Local Graph Convolutional Networks for Skeleton-Based Action Recognition", 《ARXIV》 *
FENG YAN et al.: "View-independent skeleton-based action recognition using a spatio-temporal attention deep network", Journal of Computer-Aided Design & Computer Graphics *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248581A1 (en) * 2019-06-11 2020-12-17 中国科学院自动化研究所 Graph data identification method and apparatus, computer device, and storage medium
CN111325816A (en) * 2020-02-11 2020-06-23 重庆特斯联智慧科技股份有限公司 Feature map processing method and device, storage medium and terminal
CN111539526B (en) * 2020-04-24 2022-12-06 苏州浪潮智能科技有限公司 Neural network convolution method and device
CN111539526A (en) * 2020-04-24 2020-08-14 苏州浪潮智能科技有限公司 Neural network convolution method and device
CN111967479A (en) * 2020-07-27 2020-11-20 广东工业大学 Image target identification method based on convolutional neural network idea
CN111914029A (en) * 2020-08-06 2020-11-10 平安科技(深圳)有限公司 Block chain-based medical data calling method and device, electronic equipment and medium
CN111986071A (en) * 2020-08-27 2020-11-24 苏州浪潮智能科技有限公司 Picture data processing method, device, equipment and storage medium
CN112116066B (en) * 2020-08-27 2022-12-20 苏州浪潮智能科技有限公司 Neural network computing method, system, device and medium
CN112116066A (en) * 2020-08-27 2020-12-22 苏州浪潮智能科技有限公司 Neural network computing method, system, device and medium
CN111986071B (en) * 2020-08-27 2022-11-29 苏州浪潮智能科技有限公司 Picture data processing method, device, equipment and storage medium
CN112116001B (en) * 2020-09-17 2022-06-07 苏州浪潮智能科技有限公司 Image recognition method, image recognition device and computer-readable storage medium
CN112116001A (en) * 2020-09-17 2020-12-22 苏州浪潮智能科技有限公司 Image recognition method, image recognition device and computer-readable storage medium
CN115793490A (en) * 2023-02-06 2023-03-14 南通弈匠智能科技有限公司 Intelligent household energy-saving control method based on big data
CN115793490B (en) * 2023-02-06 2023-04-11 南通弈匠智能科技有限公司 Intelligent household energy-saving control method based on big data
CN117251715A (en) * 2023-11-17 2023-12-19 华芯程(杭州)科技有限公司 Layout measurement area screening method and device, electronic equipment and storage medium
CN117251715B (en) * 2023-11-17 2024-03-19 华芯程(杭州)科技有限公司 Layout measurement area screening method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110378372A (en) Diagram data recognition methods, device, computer equipment and storage medium
CN110363086A (en) Diagram data recognition methods, device, computer equipment and storage medium
CN106778928B (en) Image processing method and device
Mao et al. Multi-level motion attention for human motion prediction
Yan et al. Ranking with uncertain labels
CN110390259A (en) Recognition methods, device, computer equipment and the storage medium of diagram data
CN109086711B (en) Face feature analysis method and device, computer equipment and storage medium
CN110517278A (en) Image segmentation and the training method of image segmentation network, device and computer equipment
CN110705448A (en) Human body detection method and device
CN112955883B (en) Application recommendation method and device, server and computer-readable storage medium
CN109753891A (en) Football player's orientation calibration method and system based on human body critical point detection
CN109784474A (en) A kind of deep learning model compression method, apparatus, storage medium and terminal device
CN110751039B (en) Multi-view 3D human body posture estimation method and related device
CN107016319A (en) A kind of key point localization method and device
CN109934065A (en) A kind of method and apparatus for gesture identification
CN112464865A (en) Facial expression recognition method based on pixel and geometric mixed features
CN110942012A (en) Image feature extraction method, pedestrian re-identification method, device and computer equipment
CN108805058A (en) Target object changes gesture recognition method, device and computer equipment
CN112801215A (en) Image processing model search, image processing method, image processing apparatus, and storage medium
US20200327726A1 (en) Method of Generating 3D Facial Model for an Avatar and Related Device
CN109446952A (en) A kind of piano measure of supervision, device, computer equipment and storage medium
CN110378213A (en) Activity recognition method, apparatus, computer equipment and storage medium
CN110147833A (en) Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN116580257A (en) Feature fusion model training and sample retrieval method and device and computer equipment
CN112116589A (en) Method, device and equipment for evaluating virtual image and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191022

RJ01 Rejection of invention patent application after publication