CN113449679A - Method and device for identifying abnormal behaviors of human body - Google Patents


Info

Publication number
CN113449679A
Authority
CN
China
Prior art keywords
convolution
discriminator
residual block
training
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110797329.5A
Other languages
Chinese (zh)
Other versions
CN113449679B (en)
Inventor
涂宏斌
占天华
高晓飞
李�杰
聂芳华
张航
罗琨
丁莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Great Wall Science And Technology Information Co ltd
Original Assignee
Hunan Great Wall Science And Technology Information Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Great Wall Science And Technology Information Co ltd filed Critical Hunan Great Wall Science And Technology Information Co ltd
Priority to CN202110797329.5A
Publication of CN113449679A
Application granted
Publication of CN113449679B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a method and a device for identifying abnormal human behaviors. The method comprises the following steps: acquiring training video data in a security check scene, splitting it into frames to obtain a training video frame image set, and classifying each training video frame image and assigning it label information to form a training sample set; constructing a generator from a first residual network built of generator residual blocks, constructing a discriminator from a second residual network built of discriminator residual blocks, and training the generative adversarial network formed by the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model; constructing a classifier from the optimized discriminator model and training it to obtain a classifier model; and acquiring video data in the security check scene, splitting it into frames to obtain video frame images, and inputting the video frame images into the classifier model to obtain an abnormal behavior classification result. The method efficiently identifies abnormal human behaviors in security check scenes.

Description

Method and device for identifying abnormal behaviors of human body
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a method and a device for identifying abnormal behaviors of a human body.
Background
Security screening is an essential step in every mode of public transportation. However, incidents frequently occur during screening that endanger the safety and rights of security inspectors.
In the prior art, abnormal human behavior is usually identified through the on-site judgment of security inspectors or through manual video monitoring, which is highly inefficient. Developing an intelligent technology that monitors and identifies abnormal human behavior in real time in security check scenes is therefore of great practical significance for protecting the safety and rights of security inspectors.
Disclosure of Invention
Aiming at the above technical problems, the invention provides a method and a device for identifying abnormal human behaviors with high identification efficiency.
The technical scheme adopted by the invention for solving the technical problems is as follows:
in one embodiment, a method for identifying abnormal human behavior includes the following steps:
step S100: acquiring training video data in a security check scene, splitting the training video data into frames to obtain a training video frame image set, classifying each training video frame image in the set, and assigning label information to form a training sample set;
step S200: constructing a generator from a first residual network built of generator residual blocks, constructing a discriminator from a second residual network built of discriminator residual blocks, and training the generative adversarial network formed by the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model;
step S300: constructing a classifier from the optimized discriminator model, and training the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function and a preset optimization function to obtain a classifier model;
step S400: acquiring video data in the security check scene, splitting the video data into frames to obtain video frame images, and inputting the video frame images into the classifier model to obtain an abnormal behavior classification result.
Preferably, the generator includes a first input layer, a first fully connected layer, the first residual network and a first output layer, the first residual network including a first generator residual block, a second generator residual block, a third generator residual block, a fourth generator residual block and a fifth generator residual block, and constructing the generator from the first residual network built of generator residual blocks in step S200 specifically comprises:
the first input layer receives noise of a preset dimension, which the first fully connected layer converts into a 4 x 4 x 256 feature map;
connecting the first generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the second generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 128, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the third generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 64, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the fourth generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 32, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the fifth generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 16, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the first output layer, which comprises a transposed convolution with stride 1 and a Tanh activation function, yielding a 128 x 128 generated image.
Preferably, the discriminator includes a second input layer, the second residual network, a first pooling layer and a second fully connected layer, the second residual network including a first discriminator residual block, a second discriminator residual block, a third discriminator residual block and a fourth discriminator residual block, and constructing the discriminator from the second residual network built of discriminator residual blocks in step S200 specifically comprises:
the second input layer receives the training video frame image set and the generated images output by the generator; it comprises a 4 x 4 convolution with stride 2 and 32 convolution kernels and uses a LeakyReLU activation function;
connecting the first discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 64, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the second discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 128, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the third discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the fourth discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the first pooling layer, which uses adaptive average pooling to reduce the output feature map to size 1;
and connecting the second fully connected layer, which outputs a one-dimensional discriminator result through a sigmoid function.
Preferably, in step S200, training the generative adversarial network formed by the generator and the discriminator according to the training sample set, the preset first loss function and the preset optimization function to obtain the optimized discriminator model comprises:
step S210: feeding noise of a preset dimension into the generator to obtain generated images;
step S220: feeding the generated images and the training video frame images of the training sample set into the discriminator for adversarial training and incrementing the iteration count; judging whether the discriminator can correctly determine whether each input is a training video frame image or a generated image; and back-propagating according to the preset first loss function and the preset optimization function to optimize the network parameters of the generator and the discriminator, until the iteration count reaches a preset first iteration threshold, at which point training ends and the optimized discriminator model is obtained.
Preferably, judging whether the discriminator can correctly determine whether the input is a training video frame image or a generated image in step S220 comprises:
when the discriminator can correctly determine whether the input is a training video frame image or a generated image, back-propagating according to the preset first loss function and the preset optimization function and returning to step S210;
and when the discriminator cannot determine whether the input is a training video frame image or a generated image, back-propagating according to the preset first loss function and the preset optimization function.
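As a concrete illustration of steps S210 and S220, the alternating adversarial training can be sketched in PyTorch, the framework named later in the description. The tiny fully connected stand-in networks, the noise dimension `z_dim = 100`, and the flat 16-dimensional stand-in "image" are assumptions made only to keep the sketch self-contained and fast; in the patent's setup they would be the residual-block generator and discriminator operating on 128 x 128 frames, with 50 epochs, batch size 64, and learning rate 0.0002 as specified in the embodiment.

```python
import torch
import torch.nn as nn

z_dim = 100
# Tiny stand-in networks (assumption) so the loop runs anywhere; the patent
# uses the residual-block generator and discriminator on 128x128 frames.
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 16), nn.Tanh())
D = nn.Sequential(nn.Linear(16, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

criterion = nn.BCELoss()                           # preset first loss function
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)  # preset optimization function
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real_batch = torch.rand(64, 16)  # placeholder for a batch of 64 training frames
for epoch in range(50):          # the patent trains for 50 epochs
    # Step S210: noise of a preset dimension is fed to the generator
    fake = G(torch.randn(64, z_dim))
    # Step S220 (discriminator side): real frames should score 1, fakes 0
    opt_d.zero_grad()
    d_loss = (criterion(D(real_batch), torch.ones(64, 1))
              + criterion(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()
    # Step S220 (generator side): back-propagate so fakes fool the discriminator
    opt_g.zero_grad()
    g_loss = criterion(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Detaching `fake` in the discriminator step keeps its update from flowing back into the generator; the undetached `D(fake)` in the generator step is what drives the adversarial game.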
Preferably, the preset first loss function is specifically:
loss = -w_i [ y_i log(x_i) + (1 - y_i) log(1 - x_i) ]
wherein y_i denotes the theoretical (ground-truth) label, w_i a weight, and x_i the actual predicted label.
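To make the behavior of this loss concrete, here is a minimal plain-Python evaluation of the formula for a single sample; the weight `w` defaults to 1 and the sample values are illustrative only.

```python
import math

def bce_loss(y, x, w=1.0):
    """Per-sample loss = -w * [ y*log(x) + (1-y)*log(1-x) ]."""
    return -w * (y * math.log(x) + (1 - y) * math.log(1 - x))

# A confident correct prediction (true label 1, predicted 0.9) costs little ...
low = bce_loss(1.0, 0.9)
# ... while a confident wrong prediction (true label 1, predicted 0.1) costs a lot.
high = bce_loss(1.0, 0.1)
```

The loss therefore pushes the discriminator's sigmoid output toward the correct label, with the penalty growing sharply as a wrong prediction becomes more confident.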
Preferably, the classifier includes a third input layer, the first discriminator residual block, the second discriminator residual block, the third discriminator residual block, the fourth discriminator residual block, a second pooling layer and a third fully connected layer, and constructing the classifier from the optimized discriminator model in step S300 specifically comprises:
the third input layer receives the training sample set and the label information corresponding to each training video frame image; it comprises a 4 x 4 convolution with stride 2 and 32 convolution kernels and uses a LeakyReLU activation function;
connecting the first discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 64, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the second discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 128, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the third discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the fourth discriminator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first convolution has stride 1, the second convolution has stride 2, and the shortcut is a convolution with a 1 x 1 kernel and stride 2;
connecting the second pooling layer, which uses adaptive average pooling to reduce the output feature map to size 1;
and connecting the third fully connected layer, which outputs a one-dimensional classifier result through a Softmax function.
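A minimal PyTorch sketch of this construction: the discriminator's feature layers are reused and only the head is swapped from a one-unit sigmoid to a Softmax over the behavior classes. The single-convolution `features` stand-in, the 3-channel input and `num_classes = 5` are assumptions for illustration; in the patent the features are the trained input convolution and the four discriminator residual blocks with their optimized weights carried over.

```python
import torch
import torch.nn as nn

# Stand-in for the trained discriminator feature extractor (assumption: in the
# patent these layers are the input convolution plus the four discriminator
# residual blocks, with their optimized weights carried over).
features = nn.Sequential(
    nn.Conv2d(3, 256, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1),  # second pooling layer: feature map reduced to 1x1
)

num_classes = 5  # assumed number of behaviour categories
classifier = nn.Sequential(
    features,
    nn.Flatten(),
    nn.Linear(256, num_classes),  # third fully connected layer
    nn.Softmax(dim=1),            # classifier output over the behaviour classes
)

probs = classifier(torch.randn(2, 3, 128, 128))  # two dummy 128x128 frames
```

Each row of `probs` is a probability distribution over the abnormal-behavior categories, so the predicted class is simply its argmax.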
Preferably, in step S300, training the classifier according to the training sample set, the label information corresponding to each training video frame image, the preset second loss function and the preset optimization function to obtain the classifier model comprises:
step S310: training the classifier according to the training sample set and the label information corresponding to each training video frame image, and incrementing the iteration count;
step S320: back-propagating according to the preset second loss function and the preset optimization function to optimize the network parameters of the classifier until the iteration count reaches a preset second iteration threshold, obtaining the classifier model.
Preferably, the preset second loss function is specifically:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )
wherein x[class] denotes the predicted score for the labeled class of the training video frame image, and x[j] denotes the predicted score for the j-th class.
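This is the standard cross-entropy over raw class scores; a minimal plain-Python evaluation for one sample makes its effect visible (the three-class scores are illustrative):

```python
import math

def cross_entropy(x, class_idx):
    """loss(x, class) = -x[class] + log(sum_j exp(x[j])) for one sample's raw scores."""
    return -x[class_idx] + math.log(sum(math.exp(v) for v in x))

scores = [2.0, 0.5, 0.1]  # raw classifier scores for three behaviour classes
loss_if_class0 = cross_entropy(scores, 0)  # label matches the highest score
loss_if_class2 = cross_entropy(scores, 2)  # label contradicts the scores
```

The loss is small when the labeled class already has the highest score and large otherwise, which is what drives the classifier's back-propagation in step S320.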
In one embodiment, a human body abnormal behavior recognition apparatus includes:
the system comprises a preprocessing module, a training sample set and a label information acquiring module, wherein the preprocessing module is used for acquiring training video data in a security check scene, performing frame preprocessing on the training video data to obtain a training video frame image set, classifying each training video frame image in the training video frame image set and giving label information to each training video frame image to serve as the training sample set;
the generator discriminator model training module is used for building a generator according to a first residual error network built by a generator residual block, building a discriminator according to a second residual error network built by a discriminator residual block, and training the generator and a generated countermeasure network built by the discriminator according to a training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model;
the classifier model training module is used for constructing a classifier according to the optimized discriminator model, and training the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function and a preset optimization function to obtain a classifier model;
and the abnormal behavior identification module is used for acquiring video data in a security check scene, performing frame pre-processing on the video data to obtain a video frame image, and inputting the video frame image into the classifier model to obtain an abnormal behavior identification classification result.
The method and the device for identifying abnormal human behaviors first acquire training video data in a security check scene, split it into frames to obtain a training video frame image set, and classify each training video frame image and assign it label information to form a training sample set. A generator built from generator residual blocks is then designed to optimize a discriminator built from discriminator residual blocks: in the adversarial game of the generative adversarial network, back propagation driven by a preset first loss function and a preset optimization function adjusts the network parameters of the generator and the discriminator, yielding an optimal discriminator model. A classifier is then constructed from the optimal discriminator, and during its training back propagation driven by a preset second loss function and a preset optimization function adjusts the classifier's network parameters, yielding an optimal classifier model for identifying abnormal human behaviors. Abnormal human behaviors in security check scenes can thus be monitored and identified rapidly and efficiently.
Drawings
Fig. 1 is a flowchart of a method for identifying abnormal human behavior according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network architecture of a generator of the present invention;
FIG. 3 is a block diagram of a generator residual block of the present invention;
FIG. 4 is a schematic diagram of the network structure of the discriminator according to the invention;
FIG. 5 is a block diagram of a discriminator residual block of the present invention;
fig. 6 is a schematic diagram illustrating a method for recognizing abnormal human behavior according to an embodiment of the present invention;
fig. 7 is a schematic diagram of the network structure of the classifier of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the present invention is further described in detail below with reference to the accompanying drawings.
In one embodiment, as shown in fig. 1, a method for identifying abnormal human behavior includes the following steps:
Step S100: acquiring training video data in a security check scene, splitting the training video data into frames to obtain a training video frame image set, classifying each training video frame image in the set, and assigning label information to form a training sample set.
Specifically, video data of a complete security check scene is acquired; preprocessing comprises splitting the video into frames to obtain images and placing the images into folders by category, i.e. assigning label information to each image, which together form the training sample set.
Step S200: constructing a generator from a first residual network built of generator residual blocks, constructing a discriminator from a second residual network built of discriminator residual blocks, and training the generative adversarial network formed by the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model.
Specifically, PyTorch is selected as the deep learning framework for building the human abnormal behavior recognition model in the security check scene. The generator is built from the first residual network composed of generator residual blocks and the discriminator from the second residual network composed of discriminator residual blocks, forming the generator and discriminator of the generative adversarial network. Both models use the BCELoss (binary cross-entropy) loss function, corresponding to the preset first loss function, and the Adam optimizer, corresponding to the preset optimization function. The generative adversarial network formed by the generator and the discriminator is trained to obtain the optimal discriminator model, with the number of training epochs set to 50, a batch size of 64 samples and a learning rate of 0.0002.
Step S300: constructing a classifier from the optimized discriminator model, and training the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function and a preset optimization function to obtain the classifier model.
Specifically, a classifier is built on top of the trained discriminator model. The classifier uses the cross-entropy loss function, corresponding to the preset second loss function, and the Adam optimizer, corresponding to the preset optimization function. The classifier is trained according to the training sample set, the label information corresponding to each training video frame image, the preset second loss function and the preset optimization function, with the number of training epochs set to 30, a batch size of 64 samples and a learning rate of 0.0002.
Step S400: acquiring video data in the security check scene, splitting the video data into frames to obtain video frame images, inputting the video frame images into the classifier model, and obtaining an abnormal behavior classification result.
Specifically, after the human abnormal behavior recognition model is built and trained, formal detection is performed: the video frame images are input into the human abnormal behavior classifier to obtain classification results for human behavior in the security check scene, including the number of abnormal behavior categories and the specific category.
In one embodiment, the generator includes a first input layer, a first fully connected layer, a first residual network and a first output layer, where the first residual network includes a first generator residual block, a second generator residual block, a third generator residual block, a fourth generator residual block and a fifth generator residual block, and constructing the generator from the first residual network built of generator residual blocks in step S200 specifically comprises:
the first input layer receives noise of a preset dimension, which the first fully connected layer converts into a 4 x 4 x 256 feature map;
connecting the first generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 256, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the second generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 128, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the third generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 64, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the fourth generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 32, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the fifth generator residual block, in which the convolution kernels are 3 x 3, the number of kernels is 16, the first transposed convolution has stride 2, the second transposed convolution has stride 1, and the shortcut is a transposed convolution with a 1 x 1 kernel and stride 2;
connecting the first output layer, which comprises a transposed convolution with stride 1 and a Tanh activation function, yielding a 128 x 128 generated image.
Specifically, the network structure of the generator is shown in fig. 2. The generator includes the first input layer, the first fully connected layer, the first residual network and the first output layer, where the first residual network includes the first to fifth generator residual blocks described above. The generator residual block is obtained by modifying the conventional residual block, and its structure is shown in fig. 3, where BN denotes batch normalization, LeakyReLU is the activation function, ConvTranspose2d denotes transposed convolution, and 3 x 3 is the convolution kernel size. The generator uses an improved residual block in which the convolutional layer is placed after batch normalization and the activation function: the input first passes through batch normalization and then, in turn, through the activation function and the convolutional layer, which strengthens the regularization of the model and reduces overfitting.
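Based on the description above, the improved generator residual block (batch normalization and LeakyReLU before each transposed convolution) and the five-block generator can be sketched in PyTorch as follows. The padding/output_padding values, the noise dimension `z_dim = 100`, and the 3-channel output are assumptions chosen so that each block doubles the spatial size (4 -> 8 -> 16 -> 32 -> 64 -> 128), since the patent specifies only kernel sizes, kernel counts, and strides.

```python
import torch
import torch.nn as nn

class GeneratorResBlock(nn.Module):
    """Improved residual block of fig. 3: BN -> LeakyReLU -> ConvTranspose2d, twice,
    with a 1x1 stride-2 transposed convolution as the shortcut. The padding and
    output_padding values are assumptions chosen so the block doubles H and W."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(out_ch, out_ch, 3, stride=1, padding=1),
        )
        self.shortcut = nn.ConvTranspose2d(in_ch, out_ch, 1, stride=2, output_padding=1)

    def forward(self, x):
        return self.main(x) + self.shortcut(x)

class Generator(nn.Module):
    def __init__(self, z_dim=100, img_ch=3):  # z_dim and img_ch are assumptions
        super().__init__()
        self.fc = nn.Linear(z_dim, 4 * 4 * 256)  # first fully connected layer
        self.blocks = nn.Sequential(
            GeneratorResBlock(256, 256),  # 4x4   -> 8x8,     256 kernels
            GeneratorResBlock(256, 128),  # 8x8   -> 16x16,   128 kernels
            GeneratorResBlock(128, 64),   # 16x16 -> 32x32,   64 kernels
            GeneratorResBlock(64, 32),    # 32x32 -> 64x64,   32 kernels
            GeneratorResBlock(32, 16),    # 64x64 -> 128x128, 16 kernels
        )
        self.out = nn.Sequential(  # first output layer: stride-1 transposed conv + Tanh
            nn.ConvTranspose2d(16, img_ch, 3, stride=1, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 4, 4)
        return self.out(self.blocks(x))
```

A noise batch of shape `(N, 100)` then yields `(N, 3, 128, 128)` images whose values lie in `[-1, 1]` thanks to the Tanh output.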
In one embodiment, the discriminator includes a second input layer, a second residual error network, a first pooling layer, and a second full-link layer, where the second residual error network includes a first discriminator residual block, a second discriminator residual block, a third discriminator residual block, and a fourth discriminator residual block, and the constructing the discriminator according to the second residual error network constructed by the discriminator residual block in step S200 specifically includes:
the second input layer inputs the training video frame image set and the generated image output by the generator; it comprises a 4 × 4 convolution with a step size of 2 and 32 convolution kernels, and uses the LeakyRelu activation function;
connecting a first discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 64, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a second discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 128, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a third discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 256, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a fourth discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 256, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting the first pooling layer, which uses adaptive average pooling so that the output feature map size is 1;
connecting the second fully-connected layer, which outputs a one-dimensional discriminator result and uses a sigmoid function.
Specifically, the network structure of the discriminator is shown in fig. 4, and the discriminator residual block is shown in fig. 5, where BN in the residual block diagram represents batch normalization, LeakyRelu is an activation function, Conv2d represents convolution, and 3x3 represents the size of the convolution kernel.
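The discriminator residual block mirrors the generator block with ordinary convolutions (BN → LeakyRelu → Conv2d, front step size 1, rear step size 2, 1 × 1 step-2 convolution shortcut). A minimal PyTorch sketch, with padding and slope values as illustrative assumptions:

```python
import torch
import torch.nn as nn

class DiscriminatorResBlock(nn.Module):
    """Pre-activation residual block that halves the spatial size.

    Front convolution has step size 1, rear convolution step size 2,
    and the shortcut is a 1x1 convolution with step size 2, as in the
    description; padding choices are assumptions.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.LeakyReLU(0.2),
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1),  # front
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2),
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1),  # rear
        )
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=2)

    def forward(self, x):
        return self.main(x) + self.shortcut(x)
```

Each block halves the feature-map size while the shortcut keeps the residual sum well-defined, so stacking the four blocks after the step-2 input convolution reduces a 128 × 128 input to a small map for the adaptive average pooling layer.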
In one embodiment, in step S200, training the generative adversarial network constructed from the generator and the discriminator according to the training sample set, the preset first loss function and the preset optimization function to obtain an optimized discriminator model includes:
step S210: using noise of a preset dimensionality as the input of the generator to obtain a generated image;
step S220: taking the generated image and each training video frame image in the training sample set as the input of the discriminator, performing adversarial training while counting the training iterations; judging whether the discriminator can correctly determine whether its input is a training video frame image or a generated image; and performing back propagation according to the preset first loss function and the preset optimization function to optimize the network parameters of the generator and the discriminator, until the number of training iterations reaches the preset first iteration threshold, at which point training ends and the optimized discriminator model is obtained.
Preferably, as shown in fig. 6, the noise of the preset dimensionality is 100-dimensional. The generator and the discriminator in the generative adversarial network compete continuously: the generator aims to produce generated images whose authenticity the discriminator cannot judge, while the discriminator aims for a discrimination capability strong enough to tell generated images from real ones. As the two compete, the loss value is calculated with the preset first loss function, and the network parameters of the generator and the discriminator are continuously adjusted with the preset optimization function until the optimal model is reached.
In one embodiment, the determining whether the discriminator can correctly determine whether the input is the training video frame image or the generated image in step S220 includes:
when the discriminator can correctly judge whether the input is a training video frame image or a generated image, performing back propagation according to a preset first loss function and a preset optimization function, and returning to the step S210;
and when the discriminator cannot judge whether the input is a training video frame image or a generated image, performing back propagation according to a preset first loss function and a preset optimization function.
Specifically, when the discriminator can correctly judge whether the input image is a training video frame image or a generated image (the T branch in fig. 6), the generator-discriminator model has not yet reached balance: the network parameters of the generator and the discriminator are adjusted by back propagation according to the preset first loss function and the preset optimization function, the generator continues to produce generated images more similar to the training video frame images, and the adversarial training continues. When the discriminator can no longer judge whether the input image is a training video frame image or a generated image (the F branch in fig. 6), the generator-discriminator model has reached balance; the network parameters of the generator and the discriminator are still adjusted by back propagation according to the preset first loss function and the preset optimization function until the number of training iterations reaches the preset first iteration threshold, and the training process ends.
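The alternating optimization just described can be sketched as a standard GAN training loop. This is an illustration only: the patent fixes the loss (weighted binary cross-entropy) and the Adam optimizer, while the learning rates, the placeholder `G`, `D`, and data loader below are assumptions:

```python
import torch
import torch.nn as nn

def train_gan(G, D, loader, epochs=1, noise_dim=100, device="cpu"):
    """Minimal adversarial training loop: D learns real->1 / fake->0,
    G learns to fool D; returns the optimized discriminator."""
    bce = nn.BCELoss()  # stands in for the preset first loss function
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    for _ in range(epochs):  # preset first iteration threshold
        for real in loader:
            real = real.to(device)
            n = real.size(0)
            z = torch.randn(n, noise_dim, device=device)
            fake = G(z)
            # discriminator step: classify real vs. generated images
            opt_d.zero_grad()
            d_loss = (bce(D(real), torch.ones(n, 1, device=device)) +
                      bce(D(fake.detach()), torch.zeros(n, 1, device=device)))
            d_loss.backward()
            opt_d.step()
            # generator step: make D label generated images as real
            opt_g.zero_grad()
            g_loss = bce(D(fake), torch.ones(n, 1, device=device))
            g_loss.backward()
            opt_g.step()
    return D  # the optimized discriminator model
```

In practice `G` and `D` would be the residual-block networks of figs. 2 and 4; any modules with compatible input/output shapes work with this loop.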
In one embodiment, the preset first loss function is specifically:
loss = -w_i [ y_i log(x_i) + (1 - y_i) log(1 - x_i) ]
where y_i represents the theoretical label, w_i represents the weight, and x_i represents the actual predicted label.
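As an illustration (not part of the patent), this first loss function is the per-sample weighted binary cross-entropy, which coincides with PyTorch's `BCELoss` given a `weight` tensor; the numeric values below are arbitrary examples:

```python
import torch

def weighted_bce(x, y, w):
    # loss_i = -w_i * (y_i * log(x_i) + (1 - y_i) * log(1 - x_i)),
    # averaged over the batch
    return (-w * (y * torch.log(x) + (1 - y) * torch.log(1 - x))).mean()

x = torch.tensor([0.9, 0.2, 0.7])   # actual predicted labels x_i in (0, 1)
y = torch.tensor([1.0, 0.0, 1.0])   # theoretical labels y_i
w = torch.tensor([1.0, 1.0, 1.0])   # weights w_i
reference = torch.nn.BCELoss(weight=w)(x, y)
```

The manual formula and the library call agree term by term, which is a quick way to check the sign conventions in the equation above.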
In one embodiment, the classifier includes a third input layer, a first discriminator residual block, a second discriminator residual block, a third discriminator residual block, a fourth discriminator residual block, a second pooling layer, and a third fully-connected layer. Constructing the classifier from the optimized discriminator model in step S300 specifically includes:
inputting, at the third input layer, the training sample set and the label information corresponding to each training video frame image; the third input layer comprises a 4 × 4 convolution with a step size of 2 and 32 convolution kernels, and uses the LeakyRelu activation function;
connecting a first discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 64, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a second discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 128, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a third discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 256, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting a fourth discriminator residual block, in which the convolution kernel size is 3 × 3, the number of convolution kernels is 256, the step size of the front convolution is 1, the step size of the rear convolution is 2, and the shortcut is a convolution with a 1 × 1 kernel and a step size of 2;
connecting the second pooling layer, which outputs a feature map of size 1;
connecting the third fully-connected layer, which outputs a one-dimensional classifier result and uses a Softmax function.
Specifically, the classifier shares part of the structure of the discriminator (with its last layer removed). The output dimension of the third fully-connected layer, the final layer, is n (n is the number of categories of abnormal human behaviors); the output is passed through a Softmax function and, together with the label information, gives a cross-entropy loss. The classifier also uses the Adam optimizer. The network structure of the classifier is shown in fig. 7 and comprises the third input layer, the first to fourth discriminator residual blocks, the second pooling layer, and the third fully-connected layer.
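The "reuse the discriminator trunk, swap the head" construction can be sketched as follows. The stand-in discriminator, its layer sizes, and `n_classes = 5` are hypothetical; the patent only fixes the idea of keeping everything but the last layer and adding an n-class fully-connected output:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the optimized discriminator: a shared
# trunk (input layer, residual blocks, pooling in the real model)
# followed by the 1-unit sigmoid head used during adversarial training.
disc = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 32),   # trunk
    nn.LeakyReLU(0.2),
    nn.Linear(32, 1),    # discriminator head (removed below)
    nn.Sigmoid(),
)

n_classes = 5  # n = number of abnormal-behavior categories (example value)

# Keep everything except the last two head modules and attach an
# n-class fully-connected layer; Softmax is folded into the
# cross-entropy loss during training, so no Softmax module is added.
classifier = nn.Sequential(*list(disc.children())[:-2],
                           nn.Linear(32, n_classes))
```

Because the trunk parameters are carried over from the optimized discriminator, the classifier starts from features already adapted to the training video frames rather than from random initialization.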
In one embodiment, in step S300, training the classifier according to the training sample set, the label information corresponding to each training video frame image, the preset second loss function, and the preset optimization function to obtain a classifier model, including:
step S310: training the classifier according to the training sample set and the label information corresponding to each training video frame image and iterating the training times;
step S320: and performing back propagation and optimizing the network parameters of the classifier according to a preset second loss function and a preset optimization function until the iterative training times reach a preset second iterative times threshold value, and obtaining a classifier model.
Specifically, as shown in fig. 6, the optimized discriminator is used to construct the classifier. Training takes the training video frame images and the label information corresponding to each image as input; the loss value is calculated according to the preset second loss function and the network parameters of the classifier are adjusted according to the preset optimization function, iterating in this way until the number of training iterations reaches the preset second iteration threshold and the optimal classifier model is obtained.
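Steps S310 and S320 above amount to an ordinary supervised training loop. A minimal sketch, assuming PyTorch, cross-entropy as the preset second loss function and Adam as the preset optimizer (the model, loader, and learning rate are placeholders):

```python
import torch
import torch.nn as nn

def train_classifier(model, loader, epochs=1, lr=2e-4):
    """Supervised loop over (image, label) batches; returns the model
    after the preset number of training iterations."""
    criterion = nn.CrossEntropyLoss()  # preset second loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):            # preset second iteration threshold
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()            # back propagation
            optimizer.step()           # optimize network parameters
    return model
```

In the method described here, `model` would be the classifier built from the optimized discriminator and `loader` would yield training video frame images with their label information.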
In one embodiment, the preset second loss function is specifically:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )
where x[class] represents the classifier output corresponding to the label information of the training video frame image, and x[j] represents the classifier output for the j-th class.
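As an illustration (not part of the patent), this second loss function is the standard softmax cross-entropy, which is what `torch.nn.CrossEntropyLoss` computes on raw classifier scores; the numbers below are arbitrary examples:

```python
import torch

# loss(x, class) = -log( exp(x[class]) / sum_j exp(x[j]) )
logits = torch.tensor([[2.0, 0.5, -1.0]])  # classifier output x
target = torch.tensor([0])                 # label information (class index)

manual = -torch.log(torch.exp(logits[0, 0]) / torch.exp(logits[0]).sum())
reference = torch.nn.CrossEntropyLoss()(logits, target)
```

Evaluating the formula by hand and through the library call gives the same value, confirming that the Softmax is folded into the loss rather than applied as a separate layer.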
The method and device for identifying abnormal human behaviors first acquire training video data in a security inspection scene, perform framing preprocessing on it to obtain a training video frame image set, and classify each training video frame image in the set, giving it label information, to form a training sample set. A generator constructed from generator residual blocks is then designed to optimize a discriminator constructed from discriminator residual blocks: in the adversarial game of the generative adversarial network, back propagation adjusts the network parameters of the generator and the discriminator in combination with the preset first loss function and the preset optimization function, yielding an optimal discriminator model. A classifier is then built from the optimal discriminator, and during training back propagation adjusts the classifier's network parameters in combination with the preset second loss function and the preset optimization function, yielding an optimal classifier model for identifying abnormal human behaviors. In this way, abnormal human behaviors in the security inspection scene can be monitored and identified rapidly and efficiently.
In one embodiment, the device for identifying abnormal human behaviors comprises a preprocessing module, a generator-discriminator model training module, a classifier model training module, and an abnormal behavior identification module, wherein:
the preprocessing module is configured to acquire training video data in a security check scene, perform framing preprocessing on the training video data to obtain a training video frame image set, and classify each training video frame image in the set, giving it label information, to serve as the training sample set;
the generator-discriminator model training module is configured to build a generator from a first residual network constructed from generator residual blocks, build a discriminator from a second residual network constructed from discriminator residual blocks, and train the generative adversarial network constructed from the generator and the discriminator according to the training sample set, the preset first loss function and the preset optimization function to obtain an optimized discriminator model;
the classifier model training module is used for constructing a classifier according to the optimized discriminator model, and training the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function and a preset optimization function to obtain a classifier model;
and the abnormal behavior identification module is used for acquiring video data in a security check scene, performing frame pre-processing on the video data to obtain a video frame image, and inputting the video frame image into the classifier model to obtain an abnormal behavior identification classification result.
For the specific limitation of the human body abnormal behavior recognition device, reference may be made to the above limitation on the human body abnormal behavior recognition method, and details are not described herein again. All or part of the modules in the human body abnormal behavior recognition device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The method and the device for identifying the abnormal human behavior provided by the invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the core concepts of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A method for identifying abnormal behaviors of a human body is characterized by comprising the following steps:
step S100: acquiring training video data in a security check scene, performing framing preprocessing on the training video data to obtain a training video frame image set, classifying each training video frame image in the training video frame image set, and giving label information to each training video frame image to serve as a training sample set;
step S200: building a generator according to a first residual network built from a generator residual block, building a discriminator according to a second residual network built from a discriminator residual block, and training a generative adversarial network built from the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model;
step S300: constructing a classifier according to the optimized discriminator model, and training the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function and a preset optimization function to obtain a classifier model;
step S400: the method comprises the steps of obtaining video data in a security check scene, carrying out frame preprocessing on the video data to obtain video frame images, inputting the video frame images into a classifier model, and obtaining abnormal behavior identification classification results.
2. The method according to claim 1, wherein the generator comprises a first input layer, a first fully connected layer, a first residual network and a first output layer, wherein the first residual network comprises a first generator residual block, a second generator residual block, a third generator residual block, a fourth generator residual block and a fifth generator residual block, and the first residual network constructed according to the generator residual blocks in step S200 constructs the generator specifically as follows:
the first input layer inputs noise with a preset dimensionality, and the input is converted into 4 × 4 × 256 through the first fully-connected layer;
connecting the first generator residual block, wherein the size of a convolution kernel in the first generator residual block is 3 × 3, the number of the convolution kernels is 256, the front transposition convolution step is 2, the step of the rear transposition convolution is 1, shortcut is transposition convolution with the size of the used convolution kernel being 1 × 1 and the step being 2;
connecting the second generator residual block, wherein the size of a convolution kernel in the second generator residual block is 3 × 3, the number of the convolution kernels is 128, the front transposition convolution step is 2, the step size of the rear transposition convolution is 1, shortcut is transposition convolution using the convolution kernel with the size of 1 × 1 and the step size of 2;
connecting the third generator residual block, wherein the size of a convolution kernel in the third generator residual block is 3 × 3, the number of the convolution kernels is 64, the front transposition convolution step is 2, the step size of the rear transposition convolution is 1, shortcut is transposition convolution with the size of the used convolution kernel being 1 × 1 and the step size being 2;
connecting the fourth generator residual block, wherein the size of a convolution kernel in the fourth generator residual block is 3 × 3, the number of the convolution kernels is 32, the front transposition convolution step is 2, the step size of the rear transposition convolution is 1, shortcut is transposition convolution with the used convolution kernel size of 1 × 1 and the step size of 2;
connecting the fifth generator residual block, wherein the size of a convolution kernel in the fifth generator residual block is 3 × 3, the number of the convolution kernels is 16, the front transposition convolution step is 2, the step size of the rear transposition convolution is 1, shortcut is transposition convolution with the size of the used convolution kernel being 1 × 1 and the step size being 2;
concatenating the first output layer, the first output layer comprising a transposed convolution with a step size of 1, resulting in a 128 x 128 generated image, the first output layer using a Tanh activation function.
3. The method according to claim 2, wherein the discriminator comprises a second input layer, a second residual network, a first pooling layer and a second fully-connected layer, wherein the second residual network comprises a first discriminator residual block, a second discriminator residual block, a third discriminator residual block and a fourth discriminator residual block, and the constructing the discriminator according to the discriminator residual block in step S200 is specifically:
the second input layer inputs the training video frame image set and the generated image output by the generator, and comprises convolution with 4 x 4 and step length of 2, the convolution kernel number of the convolution is 32, and the second input layer uses LeakyRelu activation function;
connecting the first discriminator residual block, wherein the convolution kernel of the first discriminator residual block is 3 × 3 in size, the number of the convolution kernels is 64, the front convolution step is 1, the step of the rear convolution is 2, and shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the second discriminator residual block, wherein the convolution kernel of the second discriminator residual block has a size of 3 × 3, the number of convolution kernels is 128, the front convolution step is 1, the step of the rear convolution is 2, shortcut is the convolution with the size of the used convolution kernel of 1 × 1 and the step of 2;
connecting the third discriminator residual block, wherein the convolution kernel of the third discriminator residual block has the size of 3 × 3, the number of the convolution kernels is 256, the front convolution step is 1, the step of the rear convolution is 2, and shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the fourth discriminator residual block, wherein the convolution kernel of the fourth discriminator residual block is 3 × 3, the number of the convolution kernels is 256, the front convolution step is 1, the step of the rear convolution is 2, shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the first pooling layer, and using adaptive average pooling to enable the size of the output feature map to be 1;
and connecting the second full connection layer, outputting a one-dimensional discriminator result and using a sigmoid function.
4. The method according to claim 3, wherein training the generative adversarial network constructed from the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function in step S200 to obtain an optimized discriminator model comprises:
step S210: using noise of a preset dimension as the input of the generator to obtain a generated image;
step S220: taking the generated image and each training video frame image in the training sample set as the input of the discriminator, performing adversarial training while counting the training iterations, judging whether the discriminator can correctly judge whether the input is the training video frame image or the generated image, performing back propagation according to the preset first loss function and the preset optimization function to optimize the network parameters of the generator and the discriminator until the number of training iterations reaches the preset first iteration threshold, and finishing training to obtain an optimized discriminator model.
5. The method of claim 4, wherein the step of determining whether the discriminator can correctly determine whether the input is the training video frame image or the generated image in step S220 comprises:
when the discriminator can correctly judge whether the input is the training video frame image or the generated image, performing back propagation according to a preset first loss function and a preset optimization function, and returning to the step S210;
and when the discriminator cannot judge whether the input is the training video frame image or the generated image, performing back propagation according to a preset first loss function and a preset optimization function.
6. The method according to claim 4, wherein the predetermined first loss function is specifically:
loss = -w_i [ y_i log(x_i) + (1 - y_i) log(1 - x_i) ]
wherein y_i represents the theoretical label, w_i represents the weight, and x_i represents the actual predicted label.
7. The method according to claim 6, wherein the classifier includes a third input layer, a first discriminator residual block, a second discriminator residual block, a third discriminator residual block, a fourth discriminator residual block, a second pooling layer and a third fully-connected layer, and the step S300 of constructing the classifier according to the optimized discriminator model is specifically:
inputting the label information corresponding to the training sample set and each training video frame image by the third input layer, wherein the third input layer comprises 4 × 4 convolution with the step length of 2, the number of convolution kernels is 32, and a LeakyRelu activation function is used;
connecting the first discriminator residual block, wherein the convolution kernel of the first discriminator residual block is 3 × 3 in size, the number of the convolution kernels is 64, the front convolution step is 1, the step of the rear convolution is 2, and shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the second discriminator residual block, wherein the convolution kernel of the second discriminator residual block has a size of 3 × 3, the number of convolution kernels is 128, the front convolution step is 1, the step of the rear convolution is 2, shortcut is the convolution with the size of the used convolution kernel of 1 × 1 and the step of 2;
connecting the third discriminator residual block, wherein the convolution kernel of the third discriminator residual block has the size of 3 × 3, the number of the convolution kernels is 256, the front convolution step is 1, the step of the rear convolution is 2, and shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the fourth discriminator residual block, wherein the convolution kernel of the fourth discriminator residual block is 3 × 3, the number of the convolution kernels is 256, the front convolution step is 1, the step of the rear convolution is 2, shortcut is the convolution with the used convolution kernel size of 1 × 1 and the step of 2;
connecting the second pooling layer, and using adaptive average pooling to enable the size of the output characteristic diagram to be 1;
outputting a one-dimensional classifier result through the third fully connected layer and using a Softmax function.
8. The method according to claim 7, wherein the training the classifier according to the training sample set, the label information corresponding to each of the training video frame images, a preset second loss function and a preset optimization function in step S300 to obtain a classifier model, includes:
step S310: training the classifier according to the training sample set and the label information corresponding to each training video frame image and iterating the training times;
step S320: and performing back propagation and optimizing the network parameters of the classifier according to a preset second loss function and a preset optimization function until the iterative training times reach a preset second iterative times threshold value, and obtaining a classifier model.
9. The method according to claim 8, wherein the predetermined second loss function is specifically:
loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )
wherein x[class] represents the classifier output corresponding to the label information of the training video frame image, and x[j] represents the classifier output for the j-th class.
10. An apparatus for recognizing abnormal human behavior, the apparatus comprising:
a preprocessing module, configured to acquire training video data in a security check scene, perform framing preprocessing on the training video data to obtain a training video frame image set, and classify each training video frame image in the set, giving it label information, to serve as a training sample set;
a generator-discriminator model training module, configured to build a generator from a first residual network constructed from generator residual blocks, build a discriminator from a second residual network constructed from discriminator residual blocks, and train the generative adversarial network constructed from the generator and the discriminator according to the training sample set, a preset first loss function and a preset optimization function to obtain an optimized discriminator model;
a classifier model training module, configured to construct a classifier according to the optimized discriminator model, and train the classifier according to the training sample set, the label information corresponding to each training video frame image, a preset second loss function, and a preset optimization function to obtain a classifier model;
and the abnormal behavior identification module is used for acquiring video data in a security check scene, performing framing preprocessing on the video data to obtain a video frame image, and inputting the video frame image into the classifier model to obtain an abnormal behavior identification classification result.
CN202110797329.5A 2021-07-14 2021-07-14 Method and device for identifying abnormal behaviors of human body Active CN113449679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110797329.5A CN113449679B (en) 2021-07-14 2021-07-14 Method and device for identifying abnormal behaviors of human body


Publications (2)

Publication Number Publication Date
CN113449679A true CN113449679A (en) 2021-09-28
CN113449679B CN113449679B (en) 2023-02-03

Family

ID=77816365


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113935407A (en) * 2021-09-29 2022-01-14 光大科技有限公司 Abnormal behavior recognition model determining method and device
CN116383649A (en) * 2023-04-03 2023-07-04 山东省人工智能研究院 Electrocardiosignal enhancement method based on novel generation countermeasure network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705376A (en) * 2019-09-11 2020-01-17 南京邮电大学 Abnormal behavior detection method based on generative countermeasure network
CN110796080A (en) * 2019-10-29 2020-02-14 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generation of countermeasure network
CN111340791A (en) * 2020-03-02 2020-06-26 浙江浙能技术研究院有限公司 Photovoltaic module unsupervised defect detection method based on GAN improved algorithm
CN112613494A (en) * 2020-11-19 2021-04-06 北京国网富达科技发展有限责任公司 Power line monitoring abnormity identification method and system based on deep countermeasure network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAHDYAR RAVANBAKHSH ET AL.: "Training Adversarial Discriminators for Cross-Channel Abnormal Event Detection in Crowds", 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) *
LIU KUN ET AL.: "X-ray Image Classification Algorithm Based on Semi-supervised Generative Adversarial Networks", Acta Optica Sinica *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113935407A (en) * 2021-09-29 2022-01-14 光大科技有限公司 Abnormal behavior recognition model determining method and device
CN116383649A (en) * 2023-04-03 2023-07-04 山东省人工智能研究院 Electrocardiosignal enhancement method based on novel generation countermeasure network
CN116383649B (en) * 2023-04-03 2024-01-23 山东省人工智能研究院 Electrocardiosignal enhancement method based on novel generation countermeasure network

Similar Documents

Publication Publication Date Title
CN112990432B (en) Target recognition model training method and device and electronic equipment
CN113449679B (en) Method and device for identifying abnormal behaviors of human body
CN112699786B (en) Video behavior identification method and system based on space enhancement module
CN112036513B (en) Image anomaly detection method based on memory-enhanced potential spatial autoregression
CN114155244B (en) Defect detection method, device, equipment and storage medium
CN110175248B (en) Face image retrieval method and device based on deep learning and Hash coding
CN115526891B (en) Training method and related device for defect data set generation model
CN112766040A (en) Method, device and apparatus for detecting residual bait and readable storage medium
CN110503152B (en) Two-way neural network training method and image processing method for target detection
CN114463727A (en) Subway driver behavior identification method
CN115239672A (en) Defect detection method and device, equipment and storage medium
CN115115924A (en) Concrete image crack type rapid intelligent identification method based on IR7-EC network
CN114037001A (en) Mechanical pump small sample fault diagnosis method based on WGAN-GP-C and metric learning
CN112818840A (en) Unmanned aerial vehicle online detection system and method
CN112232948A (en) Method and device for detecting abnormality of flow data
CN112052829A (en) Pilot behavior monitoring method based on deep learning
CN115601818B (en) Lightweight visible light living body detection method and device
CN113298004B (en) Lightweight multi-head age estimation method based on face feature learning
CN113239865A (en) Deep learning-based lane line detection method
CN115424250A (en) License plate recognition method and device
CN112712550A (en) Image quality evaluation method and device
CN113538199B (en) Image steganography detection method based on multi-layer perception convolution and channel weighting
CN115100419B (en) Target detection method and device, electronic equipment and storage medium
CN117710755B (en) Vehicle attribute identification system and method based on deep learning
CN114821203B (en) Fine-grained image model training and identifying method and device based on consistency loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant