CN111476115B - Human behavior recognition method, device and equipment
- Publication number: CN111476115B (application CN202010209871.XA)
- Authority
- CN
- China
- Prior art keywords
- node
- displacement vector
- feature
- attention
- image
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The human behavior recognition method comprises the following steps: acquiring an image frame sequence corresponding to a human behavior, and determining a template frame in the image frame sequence; acquiring a sorting strategy corresponding to each skeleton node according to the template frame; acquiring a first displacement vector set between each skeleton node and the other skeleton nodes within an image frame of the sequence, and a second displacement vector set determined by two adjacent frames of the sequence; sorting the first displacement vector set and the second displacement vector set corresponding to each skeleton node according to the sorting strategy to generate node feature blocks; and identifying the behavior category corresponding to the node feature blocks through a trained neural network model. Because the displacement vectors in the node feature blocks include the direction features of the nodes, actions in different directions can be effectively distinguished, which improves the accuracy of human behavior recognition.
Description
Technical Field
The application belongs to the field of artificial intelligence, and particularly relates to a human behavior recognition method, device and equipment.
Background
With the development of depth sensor technology, detection devices can now estimate the key skeleton nodes of a human body from depth information. Since skeleton nodes are sufficient to express the motion information of human movement, they can also be used to express human behavior. Compared with the complexity of a depth image, skeleton nodes contain only the coordinate information of key points of the human body, and this coordinate information does not change with the viewing angle, so many behavior recognition methods based on skeleton nodes have been proposed.
In recent years, skeleton node behavior recognition based on the convolutional neural network (CNN) has developed rapidly. The most common approach converts the skeleton node information into images on which a deep model is trained; however, this conversion easily confuses certain similar behaviors, and does not help improve the accuracy of human behavior recognition.
Disclosure of Invention
In view of the above, embodiments of the present application provide a human behavior recognition method, device and equipment, so as to solve the problems in the prior art that similar behaviors are easily confused and human behavior recognition accuracy is low.
A first aspect of an embodiment of the present application provides a human behavior recognition method, including:
acquiring an image frame sequence corresponding to a human behavior, and determining a template frame in the image frame sequence;
acquiring a sorting strategy corresponding to each skeleton node according to the template frame;
acquiring a first displacement vector set between a skeleton node and the other skeleton nodes in an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames in the image frame sequence, wherein the second displacement vector set comprises displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other image frame;
sorting, according to the sorting strategy, the first displacement vector set and the second displacement vector set corresponding to each skeleton node respectively, to generate node feature blocks;
and identifying the behavior category corresponding to the node feature blocks through a trained neural network model.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of identifying, through the trained neural network model, the behavior category corresponding to the node feature blocks includes:
inputting the node feature blocks into a trained first convolution model to obtain a first feature map set corresponding to the node feature blocks;
inputting the node feature blocks into a trained second convolution model based on an attention mechanism, performing action attention scoring on the node feature blocks, and weighting the changed action flow according to the action attention scores to obtain a second feature map;
and fusing each feature map in the first feature map set with the second feature map respectively, and classifying the human behavior according to the fused third feature map.
With reference to the first aspect, in a second possible implementation manner of the first aspect, inputting the node feature blocks into the trained second convolution model based on the attention mechanism, performing action attention scoring on the node feature blocks, and weighting the changed action flow according to the action attention scores to obtain the second feature map includes:
performing saliency scoring on the node feature blocks through the second convolution model based on the attention mechanism;
directing spatial attention and temporal attention to all areas of the node feature blocks through the saliency scores;
performing action attention score calculation on the node feature blocks based on the spatial attention and the temporal attention; and weighting the node feature blocks corresponding to the changed action flow according to the action attention scores to obtain the second feature map.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the step of acquiring an image frame sequence corresponding to a human behavior includes:
acquiring an original image sequence of human body behaviors;
sampling the original image sequence according to a preset Gaussian distribution model;
and obtaining an image frame sequence corresponding to the human behavior through bilinear interpolation.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the step of acquiring, according to the template frame, the sorting strategy corresponding to each skeleton node includes:
acquiring the distance between the i-th skeleton node in the template frame and each skeleton node in the template frame, wherein the i-th skeleton node is any skeleton node in the template frame;
sorting the distances, and determining the sorting strategy of the i-th skeleton node according to the node sequence corresponding to the distance sorting result.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of sorting the first displacement vector set and the second displacement vector set corresponding to each skeleton node according to the sorting strategy to generate the node feature blocks includes:
acquiring the start node and end node corresponding to each displacement vector in the first displacement vector set, and the start node and end node corresponding to each displacement vector in the second displacement vector set, wherein the start node is the common skeleton node of the first displacement vector set or the second displacement vector set;
determining the node sequence according to the sorting strategy of the template frame, and sorting the end nodes in the first displacement vector set and the end nodes in the second displacement vector set to obtain the displacement vector sequences corresponding to the end nodes;
and generating the node feature blocks according to the displacement vector sequences respectively determined by the plurality of nodes in a single frame, the displacement vector sequences respectively determined by the plurality of nodes in two adjacent frames, the number of skeleton nodes included in an image frame, and the number of image frames included in the image frame sequence.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the method further includes:
acquiring sample data of human body behaviors, and a sample behavior type and a sample attention area corresponding to the sample data;
inputting the sample data of the human behaviors into a neural network model to obtain an attention recognition area output by a second convolution model, and a behavior recognition type obtained by fusing the attention recognition area with the feature map set output by a first convolution model;
optimizing parameters of the second convolution model according to the difference between the attention recognition area and the sample attention area until the difference between the attention recognition area and the sample attention area meets a preset requirement;
and optimizing parameters of the first convolution model according to the difference between the behavior recognition type and the sample behavior type until the difference between the behavior recognition type and the sample behavior type meets a preset requirement.
With reference to the first aspect, in a seventh possible implementation manner of the first aspect, the sample data of human behaviors includes a plurality of different behavior types performed by a plurality of users of different ages, and a plurality of different behavior types performed by a plurality of users of different heights.
A second aspect of an embodiment of the present application provides a human behavior recognition apparatus, including:
the template frame determining unit is used for acquiring an image frame sequence corresponding to human behaviors and determining template frames in the image frame sequence;
the sorting strategy acquisition unit is used for acquiring a sorting strategy corresponding to each skeleton node according to the template frame;
a displacement vector acquisition unit, configured to acquire a first displacement vector set between a skeleton node and the other skeleton nodes in an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames in the image frame sequence, where the second displacement vector set includes displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other image frame;
the node feature block generating unit is used for sorting the first displacement vector set and the second displacement vector set corresponding to each skeleton node respectively according to the sorting strategy, to generate node feature blocks;
and the human behavior recognition unit is used for recognizing the behavior category corresponding to the node feature blocks through a trained neural network model.
A third aspect of an embodiment of the present application provides a human behavior recognition device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the human behavior recognition method according to any one of the first aspects when the computer program is executed.
A fourth aspect of embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the human behavior recognition method according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: after the sorting strategy is determined through the template frame of the image frame sequence, the first displacement vector sets calculated from skeleton nodes within each image frame and the second displacement vector sets determined from skeleton nodes of adjacent image frames are sorted to obtain node feature blocks, and the human behavior is recognized from the node feature blocks through a trained neural network model. Because the displacement vectors in the node feature blocks include the direction features of the nodes, actions in different directions can be effectively distinguished. In addition, the attention mechanism adopted by the application weights the changed action flow on the basis of the three-dimensional block features that represent the action change relation between preceding and following frames, further strengthening the salient features while suppressing useless noise information, thereby further improving the accuracy of human behavior recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow chart of a human behavior recognition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation flow of determining a ranking strategy according to an embodiment of the present application;
FIG. 3 is a schematic diagram of generating a block feature provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of an implementation of generating node feature blocks according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an implementation flow for classifying human behaviors according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an implementation flow for training a neural network model according to an embodiment of the present application;
fig. 7 is a schematic diagram of a human behavior recognition device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a human behavior recognition apparatus according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Fig. 1 is a schematic implementation flow chart of a human behavior recognition method according to an embodiment of the present application, which is described in detail below:
in step S101, an image frame sequence corresponding to a human body behavior is acquired, and a template frame in the image frame sequence is determined.
Specifically, the action types to be identified can be determined according to the application scenario of human behavior recognition. For example, in a road traffic scenario, when the behaviors of a traffic police officer are identified, the gesture-type behaviors of the officer can be recognized. In a general scenario, the identified human behaviors may include, but are not limited to, slow walking, fast walking, jogging, chest expanding, standing, jumping forward, squatting, punching, lying down, and the like.
When acquiring the image frame sequence, in order to effectively identify the behavior category to which the sequence corresponds, the sequence generally needs to contain a specific number of image frames; for example, the same number of image frames as in the sample data used to train the neural network model.
Once the number of image frames in the sequence is determined, Gaussian sampling can be performed on the original image sequence of the human behavior; that is, the original images are sampled through a preset Gaussian model to obtain sampled image frames. Bilinear interpolation can then be performed on the sampled image frames to obtain interpolated image frames, and the image frame sequence is generated from the sampled image frames and the interpolated image frames. The Gaussian model may sample with a Gaussian distribution N(μ, σ²) with mean μ = 0 and variance σ² = 5.
When the sampled image frame sequence is shorter than the longest sequence in the sample data, the missing portion may be filled with zeros.
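As a concrete illustration of this sampling step, the following Python sketch resamples a skeleton sequence to a fixed length. The function name resample_sequence, the (T, N, 3) NumPy array layout, and the use of plain linear interpolation along the time axis are illustrative assumptions, not the patent's own notation:

```python
import numpy as np

def resample_sequence(frames: np.ndarray, target_len: int) -> np.ndarray:
    """Resample a (T, N, 3) skeleton sequence to exactly target_len frames.

    Sample positions along the time axis are perturbed by Gaussian noise
    drawn from N(mu=0, sigma^2=5); frames at the resulting non-integer
    positions are obtained by linear interpolation between the two
    nearest original frames.
    """
    T = frames.shape[0]
    base = np.linspace(0.0, T - 1, target_len)
    noise = np.random.normal(loc=0.0, scale=np.sqrt(5.0), size=target_len)
    pos = np.sort(np.clip(base + noise, 0, T - 1))  # keep temporal order

    lo = np.floor(pos).astype(int)
    hi = np.ceil(pos).astype(int)
    frac = (pos - lo)[:, None, None]
    return (1.0 - frac) * frames[lo] + frac * frames[hi]
```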
The template frame is used to determine the order of the displacement vectors calculated between the skeleton nodes. Such a displacement vector may be calculated between skeleton nodes within one image frame, or between the skeleton nodes of two adjacent image frames.
The template frame in the image frame sequence may be any image frame in the image frame sequence. For the convenience of calculation, the first frame may be selected as a template frame corresponding to the image frame sequence.
In step S102, a sorting strategy corresponding to each skeleton node is obtained according to the template frame.
When the sorting strategy corresponding to a skeleton node is obtained according to the template frame, as shown in fig. 2, the method may include:
In step S201, the distance between the i-th skeleton node in the template frame and each skeleton node in the template frame is obtained, where the i-th skeleton node is any skeleton node in the template frame.
Assuming that the template frame includes N skeleton nodes, the displacement (which may be understood as a distance, i.e. a displacement vector without direction) between each skeleton node i and the N skeleton nodes can be obtained; that is, N distances can be calculated for any skeleton node. The distance between the i-th skeleton node and itself is 0.
As shown in fig. 3, when the 1st skeleton node is selected as the 1st target node in the template frame, the distances between the 1st target node and the N skeleton nodes can be calculated. Similarly, when the N-th skeleton node is taken as the N-th target node, the distances between the N-th target node and the N skeleton nodes can be calculated.
In step S202, the distances are sorted, and the sorting strategy of the i-th skeleton node is determined according to the node sequence corresponding to the distance sorting result.
So that the displacements calculated in subsequent image frames can be characterized more effectively, the N distances (or displacements) are sorted to obtain a sorting result corresponding to the i-th skeleton node in the template frame; the node sequence determined from this sorting result serves as the sorting strategy of the i-th skeleton node.
As shown in fig. 3, the N distances calculated for the 1st target node can be sorted from small to large to obtain the distance sequence of the 1st target node. Similarly, the N distances calculated for the N-th target node can be sorted from small to large to obtain the distance sequence of the N-th target node. The block-shaped features in an image frame are then determined through the N displacement vectors corresponding to each of the N target nodes.
The node sequences obtained from these distance sortings serve as the sorting strategies corresponding to the respective skeleton nodes.
For example, suppose the 1st skeleton node calculates its distances to skeleton nodes 1 to 5 (a simplified illustration), and the five distances are 0 (skeleton node 1), 7 (skeleton node 2), 2 (skeleton node 3), 6 (skeleton node 4) and 5 (skeleton node 5). Sorted from small to large they are: 0 (skeleton node 1) → 2 (skeleton node 3) → 5 (skeleton node 5) → 6 (skeleton node 4) → 7 (skeleton node 2). The node sequence corresponding to skeleton node 1 is therefore 1 → 3 → 5 → 4 → 2, and this node sequence is the sorting strategy corresponding to skeleton node 1.
In the same way, the sorting strategy corresponding to any skeleton node in the template frame can be determined.
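A minimal sketch of steps S201 and S202, assuming the template-frame node coordinates are available as an (N, 3) NumPy array; the function name sorting_strategies is hypothetical:

```python
import numpy as np

def sorting_strategies(template: np.ndarray) -> np.ndarray:
    """Derive the per-node sorting strategy from the template frame.

    template: (N, 3) coordinates of the N skeleton nodes.
    Returns an (N, N) integer array whose row i lists all node indices
    sorted by ascending distance to node i (node i itself comes first,
    since its distance to itself is 0).
    """
    diff = template[:, None, :] - template[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)        # (N, N) pairwise distances
    return np.argsort(dist, axis=1, kind="stable")

# Worked example from the text (0-based indices): distances from node 0
# to nodes 0..4 are [0, 7, 2, 6, 5], giving the order 0, 2, 4, 3, 1,
# i.e. nodes 1, 3, 5, 4, 2 in the 1-based labels used above.
```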
In step S103, a first displacement vector set between a skeleton node and the other skeleton nodes in an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames of the sequence, are acquired; the second displacement vector set comprises displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other image frame.
The first displacement vector set corresponding to an image frame may include the displacement vectors determined by any skeleton node i of the image frame together with all skeleton nodes of the same frame. A displacement vector comprises both the distance between two skeleton nodes and the direction determined by the two skeleton nodes; the directions can be fixed uniformly by taking skeleton node i as the start point and each skeleton node of the image frame as an end point.
For the second displacement vector set determined by two adjacent frames, skeleton node i of the preceding frame can be taken as the start point of a displacement vector and the N skeleton nodes of the following frame as the end points, thereby determining N displacement vectors for each skeleton node i of the N skeleton nodes of the preceding frame. It should be understood that the direction of the displacement vectors is not limited to this convention.
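The two displacement vector sets can be written compactly with broadcasting; this sketch assumes (N, 3) coordinate arrays and fixes the start-node/end-node convention described above:

```python
import numpy as np

def intra_frame_vectors(frame: np.ndarray) -> np.ndarray:
    """First set: displacement vectors within one frame.

    frame: (N, 3). Returns (N, N, 3) with entry [i, j] = frame[j] - frame[i],
    i.e. the vector from start node i to end node j of the same frame.
    """
    return frame[None, :, :] - frame[:, None, :]

def inter_frame_vectors(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Second set: displacement vectors across two adjacent frames.

    Entry [i, j] = next_frame[j] - prev_frame[i], i.e. the vector from
    start node i of the previous frame to end node j of the next frame.
    """
    return next_frame[None, :, :] - prev_frame[:, None, :]
```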
In step S104, according to the sorting strategy, the first displacement vector set and the second displacement vector set corresponding to each skeleton node are sorted, so as to generate a node feature block.
After the N displacement vectors corresponding to any skeleton node i within an image frame are determined, they can be sorted according to the predetermined sorting strategy; similarly, after the N displacement vectors corresponding to any skeleton node i of the preceding frame of two adjacent frames are determined, they can be sorted according to the same predetermined sorting strategy. As shown in fig. 4, this specifically includes:
in step S401, a start node and an end node corresponding to a displacement vector in the first displacement vector set, and a start node and an end node corresponding to a displacement vector in the second displacement vector set are obtained, wherein the start node is a common skeleton node in the first displacement vector set or the second displacement vector set;
In order to determine the arrangement order of the displacement vectors in the first and second displacement vector sets, the different skeleton nodes included in each set need to be identified. Each displacement vector set contains one common skeleton node and several non-common skeleton nodes, and the vectors can be sorted according to the non-common skeleton nodes: each non-common skeleton node is the end node of a displacement vector, while the common skeleton node is the start node of every displacement vector in the set, so the order of the displacement vectors can be determined through their end nodes.
For the displacement vectors in the second displacement vector set, the starting point node is a skeleton node in the previous frame in the two adjacent frames, and the ending node is a skeleton node in the next frame in the two adjacent frames.
In step S402, the node sequence is determined according to the sorting strategy of the template frame, and the end nodes in the first displacement vector set and the end nodes in the second displacement vector set are sorted to obtain the displacement vector sequences corresponding to the end nodes.
The predetermined sorting strategy comprises, for the i-th skeleton node, the sorting order of the skeleton nodes corresponding to its displacement vectors. From the start node of a displacement vector set, the i-th skeleton node in the sorting strategy can be looked up, and the displacement vectors in the set corresponding to each start node can then be sorted according to that node order.
That is, for the plurality of displacement vectors corresponding to each start node, the vectors are sorted by the sorting strategy corresponding to that start node. After the N displacement vectors of each of the N start nodes have been sorted, the block-shaped feature shown in fig. 3 is obtained.
For example, suppose the number of skeleton nodes of the human body is N, and the sorting strategy corresponding to each skeleton node i is stored when the strategies are determined. As a simplified example with N = 5: if the sorting strategy stored for skeleton node 3 is 5, 2, 1, 4, 3, then the 5 displacement vectors corresponding to the 3rd skeleton node in the first displacement vector set are sorted as 3-5, 3-2, 3-1, 3-4, 3-3, where a-b denotes the displacement vector with start node a and end node b.
In step S403, a node feature block is generated according to the displacement vector sequences respectively determined by the plurality of nodes in the single frame and the displacement vector sequences respectively determined by the plurality of nodes in the two adjacent frames, the number of skeleton nodes included in the image frame, and the number of image frames included in the image frame sequence.
From the displacement vector sequences respectively determined by the plurality of nodes in a single frame (for example, N nodes each determining a sequence of N displacement vectors), the first block-shaped feature corresponding to the single frame, of size N × N, can be obtained.
From the displacement vector sequences respectively determined by the plurality of nodes across two adjacent frames, the second block-shaped feature corresponding to the two adjacent frames is obtained.
Arranging the first block-shaped features and second block-shaped features of the successive frames in temporal order yields the node feature block.
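Putting the pieces together, the following sketch assembles a node feature block from a (T, N, 3) sequence and the (N, N) sorting strategy computed from the template frame. Interleaving the intra-frame and inter-frame block features in temporal order, giving 2T - 1 blocks, is one plausible reading of the arrangement described above:

```python
import numpy as np

def node_feature_block(frames: np.ndarray, order: np.ndarray) -> np.ndarray:
    """Assemble the node feature block for a (T, N, 3) skeleton sequence.

    order: (N, N) sorting strategy; row i lists the end-node order for
    start node i. For every frame the intra-frame vectors of each start
    node are reordered by this strategy, and the same is done for the
    inter-frame vectors of every adjacent frame pair; the per-frame
    (N, N, 3) blocks are then stacked in temporal order.
    """
    T, N = frames.shape[0], frames.shape[1]
    rows = np.arange(N)[:, None]                 # broadcast start-node index
    blocks = []
    for t in range(T):
        intra = frames[t][None, :, :] - frames[t][:, None, :]
        blocks.append(intra[rows, order])        # sort end nodes per start node
        if t + 1 < T:
            inter = frames[t + 1][None, :, :] - frames[t][:, None, :]
            blocks.append(inter[rows, order])
    return np.stack(blocks)                      # (2T - 1, N, N, 3)
```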
In step S105, the behavior category corresponding to the node feature block is identified by the trained neural network model.
The node feature blocks corresponding to the acquired image frame sequence of a human behavior are input into the trained neural network model, and the behavior category corresponding to the sequence is obtained, including but not limited to slow walking, fast walking, jogging, chest expanding, standing, jumping forward, squatting, punching and lying down.
In one implementation of identifying the behavior category, a second convolution model for recognizing attention areas may be introduced; it identifies the areas of the node feature block that contribute most to human behavior recognition. The implementation flow, shown in fig. 5, includes:
in step S501, the node feature block is input to a trained first convolution model, and a first feature atlas corresponding to the node feature block is obtained.
The first feature atlas corresponding to the node feature block may be obtained through a basic convolutional neural network CNN, for example, a deep learning network structure such as AlexNet, VGG, etc.
In step S502, the node feature blocks are input into the trained second convolution model based on the attention mechanism, action attention scoring is performed on the node feature blocks, and the changed action flow is weighted according to the action attention scores to obtain a second feature map.
A second feature map including the attention areas of the node feature block is extracted through the pre-trained second convolution model. This specifically includes saliency score calculation, temporal attention generation, spatial attention generation, feature block weighting and the like, described as follows.
The first displacement vector set between a skeleton node and the other skeleton nodes in an image frame, and the second displacement vector set determined by two adjacent frames, are acquired as in the preceding steps and sorted according to the predetermined sorting strategy to generate the node feature blocks, where the blocks composed of first displacement vector sets represent intra-frame block features and the blocks composed of second displacement vector sets represent inter-frame block features.
The feature extraction module in the attention-based second convolution model performs feature extraction on the input node feature blocks to obtain K feature vectors {v_1, ..., v_K}, where each feature vector corresponds to one region of the feature map (and thus to a node feature block) and K is the number of regions; for a feature map of spatial size H × W, K = H × W. A 1 × 1 convolution kernel is applied to the feature map, followed by a Sigmoid function, to obtain the saliency of each region:

s_i = sigmoid(p_s^T v_i + q_s)

where p_s and q_s are learned parameters and s_i is the saliency of the i-th region. The saliencies of all regions constitute a saliency map S. From the feature vector v_i and the saliency s_i of each region, the feature of interest a_i of each region is calculated as:

a_i = s_i (m_a^T v_i + n_a)

where m_a and n_a are learned parameters. Next, a normalized attention weight is calculated for each feature region:

w_i = a_w^T v_i + b_w
α = N(w)

where each element w_i of the vector w is the attention weight of the i-th region, N(·) is a normalization operator that limits the sum of the weights of all positions to 1, and a_w and b_w are learned parameters.
In the above way, spatial attention and temporal attention are generated for the node feature blocks: the spatial attention corresponds to the attention weight obtained by each node feature block within the intra-frame block features, and the temporal attention corresponds to the attention weight obtained by each node feature block within the inter-frame block features. The two are then combined: if both the spatial attention weight and the temporal attention weight of a node exceed a preset threshold, the node is considered to change greatly, indicating that it is a key location of motion change, and the corresponding node feature blocks are weighted based on the spatial and temporal attention. In one implementation, for example, the spatial attention weight and the temporal attention weight may be summed and averaged to obtain the second feature map.
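A hedged PyTorch sketch of one such attention branch follows. The class name RegionAttention is hypothetical, softmax is assumed for the normalization operator N(·), and the spatial and temporal attentions of the text would each be one instance of this branch (over the intra-frame and inter-frame block features respectively), with their weights combined afterwards, e.g. by averaging:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAttention(nn.Module):
    """One attention branch over the K = H * W regions of a feature map,
    following the formulas above:
        s_i = sigmoid(p_s^T v_i + q_s)
        a_i = s_i (m_a^T v_i + n_a)
        w_i = a_w^T v_i + b_w,  alpha = N(w)
    Softmax keeps the weights of all positions summing to 1.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Each 1x1 convolution realizes one (vector, bias) parameter pair.
        self.p_s = nn.Conv2d(channels, 1, kernel_size=1)  # p_s, q_s
        self.m_a = nn.Conv2d(channels, 1, kernel_size=1)  # m_a, n_a
        self.a_w = nn.Conv2d(channels, 1, kernel_size=1)  # a_w, b_w

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W); each spatial position holds one region vector v_i.
        s = torch.sigmoid(self.p_s(feat))   # saliency s_i of every region
        a = s * self.m_a(feat)              # feature of interest a_i
        w = self.a_w(feat)                  # unnormalized attention weight w_i
        b, _, h, wd = w.shape
        alpha = F.softmax(w.view(b, -1), dim=1).view(b, 1, h, wd)  # alpha = N(w)
        return alpha * feat, a              # weighted feature map and a_i map
```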
In step S503, the feature maps in the first feature map set are respectively fused with the second feature map, and the human behavior is classified according to the fused third feature map.
The second feature map is fused with each feature map in the first feature map set, for example by element-wise multiplication, so that the attention areas of the feature maps in the first set are enhanced while noise information in other areas is suppressed to a certain extent; the behavior category of the image frame sequence can thus be identified more accurately.
In one possible implementation, the second feature map may also be fused in during the computation of the first feature map set, so that the first convolution model can compute and recognize the attended areas more accurately.
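A one-line fusion sketch under the multiplication reading above; shapes are assumed broadcast-compatible and fuse_feature_maps is an illustrative name:

```python
import torch

def fuse_feature_maps(first_set: list, second_map: torch.Tensor) -> list:
    """Fuse each feature map of the first set with the attention-weighted
    second feature map by element-wise multiplication: attended regions are
    amplified, noise in the remaining regions is attenuated."""
    return [fm * second_map for fm in first_set]
```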
In addition, before implementing the present application, a process of training the convolutional neural network may be further included, as shown in fig. 6, which may include:
in step S601, sample data of human body behaviors, and sample behavior types and sample attention areas corresponding to the sample data are obtained;
The sample data may include a plurality of different behavior types performed by users of different ages and of different heights. For example, subjects may be selected with ages from 19 to 55 years and heights from 1.55 to 1.90 meters. The collected behavior types may include, for example, slow walking, fast walking, jogging, chest expanding, standing, jumping forward, squatting, punching, lying down, and the like; each behavior type may be repeated a predetermined number of times.
Wherein the sample data may be obtained from an original image sequence. After the original image sequence is acquired, the original image sequence can be sampled through a preset Gaussian distribution model, and an image frame sequence corresponding to human body behaviors can be obtained through bilinear interpolation, so that the acquired image frame sequence has the same image frame length, and subsequent model training and behavior type identification on the image frame sequence are facilitated.
After the image frame sequence is acquired, the skeleton nodes included in the sample data can be extracted, and the node feature blocks corresponding to the sequence are obtained from the intra-frame displacement vectors of the skeleton nodes and the inter-frame displacement vectors between two adjacent frames, combined with the predetermined sorting strategy. Besides marking the behavior type corresponding to each node feature block in the sample data, the attention area in the node feature block can also be annotated, so that an accurate attention area can be obtained through training.
In step S602, the sample data of the human behaviors is input into the neural network model to obtain the attention recognition area output by the second convolution model, and the behavior recognition type obtained by fusing the attention recognition area with the feature map set output by the first convolution model.
And inputting the sample data into a neural network model, outputting an attention recognition area through a second convolution model, and fusing the attention recognition area with the characteristic image set output by the first convolution model to obtain a behavior recognition type.
In step S603, the parameters of the second convolution model are optimized according to the difference between the attention recognition area and the sample attention area, until the difference between the attention recognition area and the sample attention area meets a preset requirement.
The difference is determined by comparing the pre-annotated sample attention area with the recognized attention area. If the difference does not meet the preset requirement, sample data is input again and the second convolution model is trained further, until the difference between the sample attention area and the attention recognition area of the trained model meets the preset requirement.
In step S604, parameters of the first convolution model are optimized according to the difference between the behavior recognition type and the sample behavior type, until the difference between the behavior recognition type and the sample behavior type meets a preset requirement.
The attention recognition area determined by the second convolution model is multiplied with each feature map in the set obtained by the first convolution model, so that the features of the attention recognition area are enhanced and highlighted while noise information in other areas is suppressed to a certain extent. The behavior recognition type is then obtained from the fused feature maps and compared with the sample behavior type, and the parameters of the convolutional neural network are adjusted according to the comparison result until the behavior recognition type matches the sample behavior type on all sample data. The optimized parameters of the convolutional neural network are thus obtained, so that human behaviors can be recognized with the trained neural network model.
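The training procedure can be sketched as follows. The two-headed model interface, the loss choices (binary cross-entropy for the attention area, cross-entropy for the behavior type) and the single joint optimizer are assumptions made for brevity; the patent optimizes each branch until its own difference meets a preset requirement:

```python
import torch.nn.functional as F

def train_epoch(model, loader, optimizer):
    """One joint pass over the labeled samples: an attention-region loss
    supervises the second convolution model, a behavior-type loss the first."""
    model.train()
    for blocks, region_labels, type_labels in loader:
        attn_map, logits = model(blocks)          # assumed two-headed forward
        loss_attn = F.binary_cross_entropy(attn_map, region_labels)
        loss_cls = F.cross_entropy(logits, type_labels)
        optimizer.zero_grad()
        (loss_attn + loss_cls).backward()
        optimizer.step()
```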
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 7 is a schematic structural diagram of a human behavior recognition device according to an embodiment of the present application, which is described in detail below:
the human behavior recognition apparatus includes:
a template frame determining unit 701, configured to obtain an image frame sequence corresponding to a human behavior, and determine a template frame in the image frame sequence;
a sorting strategy obtaining unit 702, configured to obtain a sorting strategy corresponding to each skeleton node according to the template frame;
a displacement vector obtaining unit 703, configured to obtain a first displacement vector set between a skeleton node and the other skeleton nodes in an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames in the image frame sequence, where the second displacement vector set includes displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other image frame;
a node feature block generating unit 704, configured to sort the first displacement vector set and the second displacement vector set corresponding to each skeleton node according to the sorting strategy, so as to generate node feature blocks;
and a human behavior recognition unit 705, configured to recognize the behavior category corresponding to the node feature blocks through a trained neural network model.
The human body behavior recognition device corresponds to the human body behavior recognition method shown in fig. 1.
Fig. 8 is a schematic diagram of a human behavior recognition apparatus according to an embodiment of the present application. As shown in fig. 8, the human behavior recognition apparatus 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82, such as a human behavior recognition program, stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps of the various human behavior recognition method embodiments described above. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules/units of the apparatus embodiments described above.
By way of example, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 82 in the human behavior recognition device 8. For example, the computer program 82 may be partitioned into:
The template frame determining unit is used for acquiring an image frame sequence corresponding to human behaviors and determining template frames in the image frame sequence;
the ordering strategy acquisition unit is used for acquiring an ordering strategy corresponding to the skeleton node according to the template frame;
a displacement vector acquisition unit, configured to acquire a first set of displacement vectors between a bone node in an image frame sequence and other bone nodes, and a second set of displacement vectors determined by two adjacent frames in the image frame sequence, where the second set of displacement vectors includes a displacement vector determined from a bone node in one of the two adjacent frames to a bone node in the other image frame;
the node characteristic block generating unit is used for respectively sequencing the first displacement vector set and the second displacement vector set corresponding to each skeleton node according to the sequencing strategy to generate a node characteristic block;
and the human body behavior recognition unit is used for recognizing the behavior category corresponding to the node characteristic block through the trained neural network model.
The human behavior recognition device 8 may be a computing device such as a desktop computer, a notebook computer, a palm computer or a cloud server. The human behavior recognition device may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of the human behavior recognition device 8 and does not constitute a limitation of it; the device may include more or fewer components than illustrated, combine certain components, or use different components, e.g., it may further include input-output devices, network access devices, buses, etc.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the human behavior recognition device 8, for example, a hard disk or a memory of the human behavior recognition device 8. The memory 81 may also be an external storage device of the human behavior recognition device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the human behavior recognition device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the human behavior recognition device 8. The memory 81 is used for storing the computer program and other programs and data required for the human behavior recognition apparatus. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the computer readable medium may include content that is subject to appropriate increases and decreases as required by jurisdictions in which such content is subject to legislation and patent practice, such as in certain jurisdictions in which such content is not included as electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (9)
1. A human behavior recognition method, characterized in that the human behavior recognition method comprises:
acquiring an image frame sequence corresponding to a human behavior, and determining a template frame in the image frame sequence;
acquiring a sorting strategy corresponding to each skeleton node according to the template frame;
acquiring a first displacement vector set between a skeleton node and the other skeleton nodes in an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames in the image frame sequence, wherein the second displacement vector set comprises displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other image frame;
sorting, according to the sorting strategy, the first displacement vector set and the second displacement vector set corresponding to each skeleton node respectively, to generate node feature blocks;
identifying the behavior category corresponding to the node feature blocks through a trained neural network model;
wherein the step of identifying the behavior category corresponding to the node feature blocks through the trained neural network model comprises:
inputting the node feature blocks into a trained first convolution model to obtain a first feature map set corresponding to the node feature blocks;
inputting the node feature blocks into a trained second convolution model based on an attention mechanism, performing action attention scoring on the node feature blocks, and weighting the changed action flow according to the action attention scores to obtain a second feature map;
and fusing each feature map in the first feature map set with the second feature map respectively, and classifying the human behavior according to the fused third feature map.
2. The method of claim 1, wherein inputting the node feature block into a trained second convolution model based on an attention mechanism, scoring the node feature block for action attention, weighting the changed action flow according to the action attention score, and obtaining a second feature map comprises:
performing saliency scoring on the node feature blocks through the second convolution model based on the attention mechanism;
directing spatial attention and temporal attention to all areas of the node feature blocks through the saliency scores;
performing action attention score calculation on the node feature blocks based on the spatial attention and the temporal attention; and weighting the node feature blocks corresponding to the changed action flow according to the action attention scores to obtain the second feature map.
3. The human behavior recognition method according to claim 1, wherein the step of acquiring the image frame sequence corresponding to the human behavior comprises:
acquiring an original image sequence of human body behaviors;
sampling the original image sequence according to a preset Gaussian distribution model;
and obtaining an image frame sequence corresponding to the human behavior through bilinear interpolation.
4. The human behavior recognition method according to claim 1, wherein the step of acquiring the sorting strategy corresponding to the skeleton nodes according to the template frame comprises:
acquiring the distances between the i-th skeleton node in the template frame and each skeleton node in the template frame, wherein i denotes any skeleton node in the template frame;
sorting according to the distances, and determining the sorting strategy of the i-th skeleton node according to the node order corresponding to the distance sorting result.
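The per-node sorting strategy of claim 4 reduces to a distance matrix plus an argsort, as in this NumPy sketch (shapes and names are assumed):

```python
import numpy as np

def sorting_strategy(template: np.ndarray) -> np.ndarray:
    """Hypothetical per-node sorting strategy; template: (N, D) node coordinates."""
    # Pairwise distances between the i-th node and every node of the template frame.
    dists = np.linalg.norm(template[:, None, :] - template[None, :, :], axis=-1)
    # Row i lists all node indices ordered by their distance to node i.
    return np.argsort(dists, axis=1)
```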
5. The human behavior recognition method according to claim 1, wherein the step of sorting, according to the sorting strategy, the first displacement vector set and the second displacement vector set corresponding to each skeleton node respectively, and generating the node feature blocks comprises:
acquiring the start node and the end node corresponding to each displacement vector in the first displacement vector set, and the start node and the end node corresponding to each displacement vector in the second displacement vector set, wherein the start node is the common skeleton node shared by the displacement vectors in the first displacement vector set or the second displacement vector set;
determining a node order according to the sorting strategy of the template frame, sorting the end nodes in the first displacement vector set, and sorting the end nodes in the second displacement vector set, to obtain displacement vector sequences corresponding to the end nodes;
and generating the node feature blocks according to the displacement vector sequences respectively determined by the plurality of nodes in a single frame and the displacement vector sequences respectively determined by the plurality of nodes in two adjacent frames, wherein the dimensions of the node feature blocks comprise the number of skeleton nodes in the image frames and the number of image frames in the image frame sequence.
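Combining the sketches after claims 1 and 4, a node feature block could be assembled as below; the stacking order, the last-frame padding of the adjacent-frame stream, and the channel-wise concatenation are assumptions made only so the shapes close:

```python
import numpy as np

def node_feature_block(intra: np.ndarray, inter: np.ndarray, order: np.ndarray):
    """Hypothetical assembly: intra (T, N, N, D), inter (T-1, N, N, D), order (N, N)."""
    T, N = intra.shape[0], intra.shape[1]
    # For each start node i, reorder the end nodes by the template-frame ordering of i.
    intra_sorted = np.stack([intra[:, i, order[i], :] for i in range(N)], axis=1)
    inter_sorted = np.stack([inter[:, i, order[i], :] for i in range(N)], axis=1)
    # Pad the adjacent-frame stream to T frames (assumption) and stack channel-wise,
    # so the block spans the node count and the frame count of the sequence.
    inter_sorted = np.concatenate([inter_sorted, inter_sorted[-1:]], axis=0)
    return np.concatenate([intra_sorted, inter_sorted], axis=-1)  # (T, N, N, 2D)
```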
6. The human behavior recognition method according to claim 1, further comprising:
acquiring sample data of human behaviors, and a sample behavior type and a sample attention area corresponding to the sample data;
inputting the sample data of the human behaviors into the neural network model to obtain an attention recognition area output by the second convolution model, and a behavior recognition type obtained by fusing the attention recognition area with the feature maps output by the first convolution model;
optimizing parameters of the second convolution model according to the difference between the attention recognition area and the sample attention area, until the difference between the attention recognition area and the sample attention area meets a preset requirement;
and optimizing parameters of the first convolution model according to the difference between the behavior recognition type and the sample behavior type, until the difference between the behavior recognition type and the sample behavior type meets a preset requirement.
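A hedged PyTorch sketch of one training step for claim 6; the model interface (returning the predicted attention map and the fused class logits) and the joint optimisation of both losses in a single step are simplifying assumptions, whereas the claim optimises each convolution model against its respective difference until it meets a preset requirement:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, blocks, labels, attn_targets):
    """One hypothetical optimisation step over a batch of node feature blocks."""
    optimizer.zero_grad()
    attn_map, logits = model(blocks)               # assumed model interface
    # Attention branch: match the annotated sample attention area.
    attn_loss = F.mse_loss(attn_map, attn_targets)
    # Classification branch: match the annotated sample behavior type.
    cls_loss = F.cross_entropy(logits, labels)
    (attn_loss + cls_loss).backward()              # joint step; the claim trains per branch
    optimizer.step()
    return attn_loss.item(), cls_loss.item()
```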
7. A human behavior recognition apparatus, characterized in that the human behavior recognition apparatus comprises:
the template frame determining unit is used for acquiring an image frame sequence corresponding to human behaviors, and determining a template frame in the image frame sequence;
the sorting strategy acquisition unit is used for acquiring a sorting strategy corresponding to each skeleton node according to the template frame;
the displacement vector acquisition unit is used for acquiring a first displacement vector set between each skeleton node and the other skeleton nodes within an image frame of the image frame sequence, and a second displacement vector set determined by two adjacent frames in the image frame sequence, wherein the second displacement vector set comprises displacement vectors determined from a skeleton node in one of the two adjacent frames to a skeleton node in the other frame;
the node feature block generating unit is used for sorting, according to the sorting strategy, the first displacement vector set and the second displacement vector set corresponding to each skeleton node respectively, to generate node feature blocks;
and the human behavior recognition unit is used for identifying the behavior category corresponding to the node feature blocks through a trained neural network model;
wherein the human behavior recognition unit comprises:
the first feature map set acquisition subunit is used for inputting the node feature block into a trained first convolution model to obtain a first feature map set corresponding to the node feature block;
the second feature map acquisition subunit is used for inputting the node feature block into a trained second convolution model based on an attention mechanism, performing action-attention scoring on the node feature block, and weighting the changing action flow according to the action attention scores to obtain a second feature map;
and the fusion classification subunit is used for fusing each feature map in the first feature map set with the second feature map respectively, and classifying the human behavior according to the fused third feature maps.
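For the fusion subunit of claims 1 and 7, one plausible element-wise fusion followed by classification, sketched in PyTorch (the multiplicative fusion, the pooling, and the layer sizes are assumptions, not the patent's disclosed fusion rule):

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Hypothetical fusion of the first feature-map set with the attention map."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, first_set: torch.Tensor, second_map: torch.Tensor):
        # first_set: (B, C, H, W) maps from the first convolution model;
        # second_map: (B, 1, H, W) attention-weighted map from the second model.
        fused = first_set * second_map        # fuse every map with the attention map
        pooled = fused.mean(dim=(2, 3))       # global average pool the fused maps
        return self.classifier(pooled)        # behavior-class logits
```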
8. A human behavior recognition device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the human behavior recognition method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the human behavior recognition method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010209871.XA CN111476115B (en) | 2020-03-23 | 2020-03-23 | Human behavior recognition method, device and equipment
Publications (2)
Publication Number | Publication Date |
---|---|
CN111476115A CN111476115A (en) | 2020-07-31 |
CN111476115B (en) | 2023-08-29
Family
ID=71748330
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010209871.XA CN111476115B (en) | 2020-03-23 | 2020-03-23 | Human behavior recognition method, device and equipment
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476115B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008130903A1 (en) * | 2007-04-17 | 2008-10-30 | Mikos, Ltd. | System and method for using three dimensional infrared imaging for libraries of standardized medical imagery |
WO2018126956A1 (en) * | 2017-01-05 | 2018-07-12 | 腾讯科技(深圳)有限公司 | Method and device for information processing, and server |
CN108985259A (en) * | 2018-08-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | Human motion recognition method and device |
CN109800659A (en) * | 2018-12-26 | 2019-05-24 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | A kind of action identification method and device |
Also Published As
Publication number | Publication date |
---|---|
CN111476115A (en) | 2020-07-31 |
Similar Documents
Publication | Title
---|---
CN111881705B (en) | Data processing, training and identifying method, device and storage medium
CN110472531B (en) | Video processing method, device, electronic equipment and storage medium
CN109522942B (en) | Image classification method and device, terminal equipment and storage medium
CN110738101A (en) | Behavior recognition method and device and computer readable storage medium
Chaudhari et al. | Yog-guru: Real-time yoga pose correction system using deep learning methods
CN113326835B (en) | Action detection method and device, terminal equipment and storage medium
CN110633004B (en) | Interaction method, device and system based on human body posture estimation
CN106295531A (en) | A kind of gesture identification method and device and virtual reality terminal
CN114495241B (en) | Image recognition method and device, electronic equipment and storage medium
CN113435432B (en) | Video anomaly detection model training method, video anomaly detection method and device
CN112149602A (en) | Action counting method and device, electronic equipment and storage medium
Ghosh et al. | Contextual rnn-gans for abstract reasoning diagram generation
JP6381368B2 (en) | Image processing apparatus, image processing method, and program
CN111694954B (en) | Image classification method and device and electronic equipment
CN111353325A (en) | Key point detection model training method and device
KR20210054349A (en) | Method for predicting clinical functional assessment scale using feature values derived by upper limb movement of patients
CN114168768A (en) | Image retrieval method and related equipment
CN117854155A (en) | Human skeleton action recognition method and system
CN111652168B (en) | Group detection method, device, equipment and storage medium based on artificial intelligence
CN112990009A (en) | End-to-end-based lane line detection method, device, equipment and storage medium
CN111476115B (en) | Human behavior recognition method, device and equipment
CN116959097A (en) | Action recognition method, device, equipment and storage medium
CN113724176B (en) | Multi-camera motion capture seamless connection method, device, terminal and medium
CN117011566A (en) | Target detection method, detection model training method, device and electronic equipment
CN112257642B (en) | Human body continuous motion similarity evaluation method and evaluation device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant