CN115294519A - Abnormal event detection and early warning method based on lightweight network - Google Patents

Abnormal event detection and early warning method based on lightweight network

Info

Publication number
CN115294519A
CN115294519A (application CN202210870151.7A)
Authority
CN
China
Prior art keywords
abnormal event
detection
event
abnormal
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210870151.7A
Other languages
Chinese (zh)
Inventor
杨彤
李雪
李锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd filed Critical Shandong Inspur Science Research Institute Co Ltd
Priority to CN202210870151.7A priority Critical patent/CN115294519A/en
Publication of CN115294519A publication Critical patent/CN115294519A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides an abnormal event detection and early warning method based on a lightweight network, belonging to the technical field of video behavior analysis, which comprises the following steps: constructing an initial abnormal event data set and expanding it with data enhancement techniques; constructing a Mobile-CenterNet-based abnormal event detection network, configuring training parameters and training a detection model; acquiring surveillance video data, feeding interval-sampled frames into the pre-trained detection model and outputting detection results; and analyzing the image detection results, confirming the type, position and grade of the event, raising an alarm, and coordinating multiple cameras to record and track the abnormal event. The method achieves high event recognition accuracy, real-time detection and easy deployment, adapts to complex scenes, helps relevant departments respond and rescue in time, reduces casualties and property loss, and improves the efficiency of public-area safety management.

Description

Abnormal event detection and early warning method based on lightweight network
Technical Field
The invention relates to the technical field of video behavior analysis, in particular to an abnormal event detection and early warning method based on a lightweight network.
Background
In order to strengthen the management and control of public areas, most cities have installed video surveillance systems in public places, and these play a certain role. At present, relevant departments can use intelligent cameras to view and record the live situation of a given area, but this consumes a large amount of manpower and material resources, and many systems operate in a "record only, no judgment" mode: they have no predictive effect on abnormal events and more often serve merely as after-the-fact query tools.
By extracting and analyzing information about abnormal events in video, the type and location of an abnormal event can be detected, identified and localized in a timely and accurate manner; after an abnormal event occurs, relevant departments can respond and carry out rescue promptly, suppressing the development of the event as close to its source as possible.
At present, many experts and scholars apply deep learning to abnormal event detection. Features obtained from neural networks have stronger semantics and discriminative power, and detection accuracy is greatly improved, but higher demands are placed on the computing power and memory of the computing platform, which limits deployment of such algorithms on intelligent terminal devices.
Disclosure of Invention
To solve the above technical problems, the invention provides an abnormal event detection and early warning method based on a lightweight neural network, which offers real-time detection, easy deployment, high event recognition accuracy and adaptability to complex scenes. The method greatly reduces the number of model parameters and improves detection speed, enabling the algorithm to run quickly and accurately on embedded processors.
The technical scheme of the invention is as follows:
an abnormal event detection and early warning method based on a lightweight network,
the method comprises the following steps:
constructing an initial abnormal event data set, and expanding the data set by using a data enhancement technology;
constructing an abnormal event detection network based on Mobile-CenterNet, configuring training parameters and training a detection model;
acquiring monitoring video data, inputting interval frames into a pre-trained detection model for detection, and outputting a detection result;
and analyzing the image detection result, confirming the type, position and grade of the event, giving an alarm, and coordinating the camera to record and track the abnormal event.
Further, in the above-mentioned case,
the method comprises the following specific steps:
s1, constructing an abnormal event data set;
s2, building a Mobile-CenterNet abnormal event detection network, configuring training parameters, and training a detection model to obtain a detection model;
s3, acquiring surveillance video data and intercepting video frames at intervals to obtain the required image data; inputting the image data into the detection model pre-trained in step S2 for abnormal event recognition and detection, and outputting the detection results: the position, confidence and category of each detection frame;
s4, carrying out grade judgment and classification on the abnormal event picture frame in the step S3 through an abnormal event classifier;
s5, result processing and early warning; setting the abnormal event alarm threshold to 0.6, and judging from the event confidences detected in steps S3 and S4 whether the event is an alarm target and determining its early warning grade; when the confidence of a target is higher than the threshold, sending alarm information, transmitting the category, grade and position of the abnormal event to the relevant staff so that countermeasures can be taken; if no dangerous event is found, continuing to analyze the next frame;
s6, after the abnormal event is early-warned, coordinating a plurality of cameras to track the identified abnormal event in a visual range, recording the occurrence process of the abnormal event in a multi-angle mode, and predicting the occurrence track;
s7, when new abnormal detection data are encountered, judging that an abnormal event has occurred but without determining its category, recording the new data, reconstructing the network structure and training a new model;
and S8, repeating the steps from S3 to S6, and continuously carrying out real-time analysis on the video stream.
In a still further aspect of the present invention,
in step S1, the data set comes from public network data sets and screenshots of online videos, and the original images are translated, rotated and flipped using data enhancement techniques to expand the data set.
In step S2, firstly, a 4-fold downsampling preprocessing is performed on the input image to reduce the image resolution; then, extracting features through a backbone network MobileNet;
an attention multi-scale feature fusion module SENet is added behind the backbone network;
when the heat map converges during model training, adding multi-kernel filtering as post-processing at the heat-map branch output: each key point is traversed with m (m > 1) kernel filters of different sizes, and if the maximum value within a kernel range is not equal to the current value, the current value is set to 0; finally m maximum-filtering results are obtained, so that local peaks always retain the maximum value.
In step S4, the classification criteria include: whether the event is a violent event or not, and the number of persons participating in the event;
and counting the number of pedestrians, displaying the count n in the visual output, and having the classifier divide events into single-person, two-person and multi-person events according to the counted crowd size.
In step S5, when the real-time video stream detection result is compared and analyzed with the sample library, if the confidence of the abnormal event is found to exceed the alarm threshold, an alarm message is sent, and the category, level and position information of the abnormal event is sent to the relevant staff, and the staff takes measures according to the alarm message.
In step S6, comprehensive judgment is performed by using videos from more than one angle, and motion characteristics of targets in different cameras at the same time are calculated on the basis of time synchronization, so as to predict a motion trajectory.
In step S7, when a new abnormal detection category is encountered, the event is determined to be an abnormal event, but the category cannot be determined, the network records new data, and continues to add a sample set at a later stage, reconstruct a network structure, and train a new model.
The invention has the advantages that
The method makes full use of the high real-time performance and easy deployability of CenterNet and the lightweight network MobileNet, achieves strong generalization in scenes such as surveillance video with low resolution, mixed crowds and complex behavior, judges abnormal events in the monitoring system more accurately and raises warnings in time, frees video monitoring staff from heavy monitoring and patrol work, and greatly improves the efficiency of public-area safety management.
Drawings
FIG. 1 is a flow chart of the working steps of the present invention;
FIG. 2 is a diagram of a Mobile-CenterNet network architecture for use with the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments will be described below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; based on these embodiments, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As shown in fig. 1, the method for detecting video abnormal events based on a lightweight network of the present invention includes the following steps:
s1, constructing an abnormal event data set;
s2, building a Mobile-CenterNet abnormal event detection network, configuring training parameters, and training a detection model to obtain a detection model;
s3, acquiring surveillance video data and intercepting video frames at intervals to obtain the required image data; inputting the image data into the detection model pre-trained in step S2 for abnormal event recognition and detection, and outputting the detection results: the position, confidence and category of each detection frame.
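As a non-limiting illustration, the interval-frame sampling of step S3 can be sketched as follows; the frame source and the interval value are assumptions for the example, not specified by the patent:

```python
def sample_frames(frames, interval):
    """Yield every `interval`-th frame from an iterable of frames.

    Illustrative sketch of interval-frame interception (step S3):
    only one frame in every `interval` frames of the video stream
    is passed on to the detection model.
    """
    for i, frame in enumerate(frames):
        if i % interval == 0:
            yield frame


# Example: with interval 3, frames 0, 3, 6, 9 of a 10-frame clip are kept.
kept = list(sample_frames(range(10), 3))
```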
S4, performing grade judgment and classification on the abnormal event picture frame in the S3 through an abnormal event classifier;
s5, result processing and early warning; setting the abnormal event alarm threshold to 0.6, and judging from the event confidences detected in steps S3 and S4 whether the event is an alarm target and determining its early warning grade; when the confidence of a target is higher than the threshold, sending alarm information, transmitting the category, grade and position of the abnormal event to the relevant staff so that corresponding measures can be taken; if no dangerous event is found, continuing to analyze the next frame.
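The threshold decision of step S5 can be sketched as follows; the 0.6 threshold comes from the patent, while the (category, grade, box, confidence) tuple layout is an assumption for illustration:

```python
ALARM_THRESHOLD = 0.6  # alarm threshold named in step S5

def check_alarm(detections):
    """Return the detections that should trigger an alarm.

    Illustrative sketch: each detection is assumed to be a
    (category, grade, box, confidence) tuple; a detection is an
    alarm target when its confidence exceeds the threshold.
    """
    return [d for d in detections if d[3] > ALARM_THRESHOLD]
```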
And S6, after the abnormal event is early-warned, coordinating a plurality of cameras to track the identified abnormal event in a visual range, recording the occurrence process of the abnormal event in a multi-angle mode, and predicting the occurrence track.
And S7, when new abnormal detection data are encountered, judging that an abnormal event has occurred but without determining its category, recording the new data, reconstructing the network structure and training a new model.
And S8, repeating the steps from S3 to S6, and continuously carrying out real-time analysis on the video stream.
In the step S1, the data set comes from public network data sets and screenshots of online videos, and the original images are translated, rotated and flipped using data enhancement techniques to expand the data set;
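The translation, rotation and flip augmentation of step S1 can be sketched with NumPy alone; the shift amount and the 90° rotation angle are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def augment(image):
    """Generate translated, rotated, and flipped copies of an image.

    Illustrative sketch of the data enhancement in step S1:
    - translation via a circular shift (np.roll),
    - rotation by 90 degrees (np.rot90),
    - horizontal flip via reversed column slicing.
    """
    shifted = np.roll(image, shift=(5, 5), axis=(0, 1))  # translation
    rotated = np.rot90(image)                            # rotation
    flipped = image[:, ::-1]                             # horizontal flip
    return [shifted, rotated, flipped]
```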
in step S2, the established Mobile-centret abnormal event detection model determines the target by finding the center point, width and height of the object to be recognized based on the original centret model and using the idea of "point is target". Backbone network selection
In the step S2, the selected CenterNet is an Anchor-free algorithm, and the method does not use NMS and a prior frame, so that the whole detection frame can be simplified to a great extent, and the training and reasoning prediction time is greatly reduced; the main network is a MobileNet, a lightweight convolutional neural network which is concentrated in a mobile terminal or an embedded device, and has high precision, low calculation cost and few parameters;
firstly, 4-fold downsampling preprocessing is carried out on the input image to reduce its resolution; feature extraction is then performed through the improved backbone network MobileNet. The image is fed into the network to obtain a keypoint heat map, and the peak point of the heat map is taken as the center point p(x', y') of the target. The local offset (o_x, o_y) and the width and height (w, h) of the bounding box are then predicted from the center point. Finally, the network model outputs 5 values in total (x, y, w, h, c), where x = x' + o_x and y = y' + o_y; these represent, respectively, the x and y coordinates of the center of the prediction frame in the image coordinate system, the width and height of the rectangle, and the confidence. The network gives a visual prediction result for the input image, drawing the corresponding event prediction box and confidence percentage on the original image; after passing through the grade classification network, the abnormality grade is also displayed.
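The decoding described above (heat-map peak as the integer center, refined by the local offset and paired with the regressed width and height) can be sketched for a single class as follows; the array layouts, the score threshold and treating the stride as the 4-fold downsampling factor are assumptions of this sketch:

```python
import numpy as np

def decode_detections(heatmap, offsets, sizes, stride=4, threshold=0.3):
    """Decode a CenterNet-style heat map into (x, y, w, h, c) tuples.

    Illustrative single-class sketch: `heatmap` is (H, W),
    `offsets` and `sizes` are (2, H, W). Each point above the
    threshold is taken as a center x' , y'; the refined center is
    x = x' + o_x, y = y' + o_y, mapped back to input resolution
    by the downsampling stride.
    """
    ys, xs = np.where(heatmap > threshold)
    detections = []
    for y, x in zip(ys, xs):
        ox, oy = offsets[:, y, x]
        w, h = sizes[:, y, x]
        cx = (x + ox) * stride  # x = x' + o_x, at input scale
        cy = (y + oy) * stride  # y = y' + o_y, at input scale
        detections.append((cx, cy, w * stride, h * stride,
                           float(heatmap[y, x])))
    return detections
```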
The loss function of the model is a linear fusion of a keypoint confidence loss function $L_k$, a target bounding-box size loss function $L_{size}$ and an offset loss function $L_{off}$:

$$L_{det} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off}$$

where $\lambda_{size}$ and $\lambda_{off}$ are the parameter coefficients of $L_{size}$ and $L_{off}$.

The keypoint confidence loss is a pixel-wise focal loss over the heat map:

$$L_k = \frac{-1}{N} \sum_{xyc} \begin{cases} \left(1-\hat{Y}_{xyc}\right)^{\alpha} \log\left(\hat{Y}_{xyc}\right), & Y_{xyc}=1 \\ \left(1-Y_{xyc}\right)^{\beta} \left(\hat{Y}_{xyc}\right)^{\alpha} \log\left(1-\hat{Y}_{xyc}\right), & \text{otherwise} \end{cases}$$

In the formula, the subscript $k$ denotes the $k$-th input image, $N$ is the number of keypoints in the image, $\alpha$ and $\beta$ are hyper-parameters, $\hat{Y}_{xyc}$ is the predicted value and $Y_{xyc}$ is the true label value.

With $s_k$ the size of the original target frame and $\hat{s}_k$ the regressed target-frame size:

$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{s}_k - s_k \right|$$

With $p$ the center point of the target frame, $R$ the downsampling factor, $\hat{O}_{\tilde{p}}$ the predicted offset and $\tilde{p} = \lfloor p/R \rfloor$ the center-point coordinate predicted by the model:

$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left( \frac{p}{R} - \tilde{p} \right) \right|$$
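The keypoint confidence loss $L_k$ can be sketched in NumPy as follows; the default hyper-parameter values $\alpha = 2$, $\beta = 4$ are the ones used by the original CenterNet work and are an assumption here, since the patent does not state them:

```python
import numpy as np

def keypoint_focal_loss(pred, target, alpha=2.0, beta=4.0):
    """Pixel-wise focal loss L_k over a keypoint heat map.

    Illustrative sketch: `pred` and `target` are heat maps with
    values in (0, 1). Positions where target == 1 are keypoints;
    all other positions use the penalty-reduced negative term.
    """
    eps = 1e-12                      # numerical guard for log
    pos = target == 1
    pos_loss = ((1 - pred[pos]) ** alpha
                * np.log(pred[pos] + eps)).sum()
    neg = ~pos
    neg_loss = ((1 - target[neg]) ** beta
                * pred[neg] ** alpha
                * np.log(1 - pred[neg] + eps)).sum()
    n = max(int(pos.sum()), 1)       # number of keypoints N
    return -(pos_loss + neg_loss) / n
```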
In step S2, an attention multi-scale feature fusion module, SENet, is added after the backbone network; the module is structurally simple, easy to deploy, and introduces no new spatial functions or convolution layers. The output features of the backbone network are first compressed along the spatial dimensions using global average pooling, turning each two-dimensional feature channel into a single real number with a global receptive field, so that even layers close to the input obtain global context. The correlation between channels is then modeled with two fully connected layers: the first reduces the feature dimension, and after ReLU activation a second fully connected layer restores the original dimension, giving the output more nonlinearity to fit the complex inter-channel correlations. Finally, a normalized weight is obtained through a Sigmoid and applied to the features of each channel via a Scale (channel-wise multiplication) operation.
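The squeeze-excitation-scale sequence just described can be sketched in NumPy; the weight shapes (a reduction layer of C/r units followed by a restoring layer of C units) are assumptions of this sketch:

```python
import numpy as np

def se_block(features, w1, b1, w2, b2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    Illustrative sketch of the SENet module: global average pooling
    (squeeze), FC + ReLU (reduce to C/r), FC + Sigmoid (restore to C),
    then channel-wise scaling. w1 is (C/r, C), w2 is (C, C/r).
    """
    squeezed = features.mean(axis=(1, 2))             # squeeze: (C,)
    hidden = np.maximum(w1 @ squeezed + b1, 0)        # FC + ReLU
    weights = 1 / (1 + np.exp(-(w2 @ hidden + b2)))   # FC + Sigmoid
    return features * weights[:, None, None]          # scale channels
```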
In step S2, when the heat map converges during model training, detecting peaks with maximum pooling alone may produce multiple extrema. Multi-kernel filtering is therefore added at the heat-map branch output: each key point is traversed with m kernel filters of different sizes, and if the maximum value within a kernel range is not equal to the current value, the current value is set to 0; finally m maximum-filtering results are obtained, so that local peaks always retain the maximum value.
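The multi-kernel peak filtering can be sketched as follows; the kernel sizes (3, 5, 7) are an assumption, since the patent only requires m > 1 kernels of different sizes:

```python
import numpy as np

def multi_kernel_peak_filter(heatmap, kernel_sizes=(3, 5, 7)):
    """Suppress non-peak heat-map values with m max filters.

    Illustrative sketch: for each kernel size, a point keeps its
    value only if it equals the maximum of its kernel neighbourhood,
    otherwise it is set to 0. Returns the m filtered maps.
    """
    results = []
    h, w = heatmap.shape
    for k in kernel_sizes:
        r = k // 2
        out = np.zeros_like(heatmap)
        for y in range(h):
            for x in range(w):
                window = heatmap[max(0, y - r):y + r + 1,
                                 max(0, x - r):x + r + 1]
                if heatmap[y, x] == window.max():
                    out[y, x] = heatmap[y, x]
        results.append(out)
    return results
```

(In practice a vectorized maximum filter, e.g. from scipy.ndimage or a max-pooling layer, would replace the explicit loops; the loops are kept here for clarity.)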
In step S4, an abnormal event classifier is designed. The classification criteria are: whether the event is violent, and the number of people involved. Detected abnormal events are divided into violent and non-violent, classified by grade as primary (violent) and secondary (non-violent) abnormal events. Violent abnormal events include group fights, trampling, theft and stranger intrusion; non-violent abnormal events include falls, climbing and chasing. The model additionally counts the number of pedestrians in the prediction frame and displays the count n in the visual output, and the classifier divides events into single-person, two-person and multi-person events according to the counted crowd size.
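The grading rules of step S4 can be sketched as a simple decision function; the label strings and the exact person-count boundaries are illustrative assumptions:

```python
def classify_event(is_violent, n_people):
    """Grade and group an abnormal event (sketch of step S4).

    Grade: primary for violent events, secondary for non-violent.
    Group: single-, double-, or multi-person by the pedestrian count n.
    """
    level = "primary (violent)" if is_violent else "secondary (non-violent)"
    if n_people <= 1:
        group = "single-person"
    elif n_people == 2:
        group = "double-person"
    else:
        group = "multi-person"
    return level, group
```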
In step S5, when the real-time video stream detection result is compared and analyzed with the sample library, if the confidence of the abnormal event is found to exceed the alarm threshold, an alarm message is sent, and the category, level and position information of the abnormal event is sent to the relevant staff, and the staff takes measures according to the alarm message.
In step S6, judging an abnormal event from different viewing angles may yield different results: one view may look normal while another looks abnormal. Videos from multiple angles are therefore used for comprehensive judgment; on the basis of time synchronization, the motion features of targets in different cameras at the same moment, including position, motion direction and speed, are calculated in order to predict the motion trajectory.
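The position/direction/speed features of step S6 support a simple trajectory extrapolation; the constant-velocity model below is an assumption of this sketch, as the patent does not specify the prediction method:

```python
def predict_position(p0, p1, dt, horizon):
    """Constant-velocity extrapolation of a tracked target.

    Illustrative sketch of step S6: from two time-synchronized
    observations p0 and p1 taken dt seconds apart, the velocity is
    estimated and the position `horizon` seconds after p1 is predicted.
    """
    vx = (p1[0] - p0[0]) / dt  # speed along x
    vy = (p1[1] - p0[1]) / dt  # speed along y
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)
```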
In step S7, when a new abnormal detection category is encountered, the model may determine that the event is an abnormal event, but cannot determine the category, the network may record new data, and continue to add a sample set in a later stage, reconstruct a network structure, and train a new model.
The above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. An abnormal event detection and early warning method based on a lightweight network is characterized in that,
the method comprises the following steps:
constructing an initial abnormal event data set, and expanding the data set by using a data enhancement technology;
constructing an abnormal event detection network based on Mobile-CenterNet, configuring training parameters and training a detection model;
acquiring monitoring video data, inputting interval frames into a pre-trained detection model for detection, and outputting a detection result;
and analyzing the image detection result, confirming the type, position and grade of the event, giving an alarm, and coordinating the camera to record and track the abnormal event.
2. The method of claim 1,
the method comprises the following specific steps:
s1, constructing an abnormal event data set;
s2, building a Mobile-CenterNet abnormal event detection network, configuring training parameters, and training a detection model to obtain a detection model;
s3, acquiring surveillance video data and intercepting video frames at intervals to obtain the required image data; inputting the image data into the detection model pre-trained in step S2 for abnormal event recognition and detection, and outputting the detection results: the position, confidence and category of each detection frame;
s4, carrying out grade judgment and classification on the abnormal event picture frame in the step S3 through an abnormal event classifier;
s5, result processing and early warning; setting the abnormal event alarm threshold to 0.6, and judging from the event confidences detected in steps S3 and S4 whether the event is an alarm target and determining its early warning grade; when the confidence of a target is higher than the threshold, sending alarm information, transmitting the category, grade and position of the abnormal event to the relevant staff so that countermeasures can be taken; if no dangerous event is found, continuing to analyze the next frame;
s6, after the abnormal event is early-warned, coordinating a plurality of cameras to track the identified abnormal event in a visual range, recording the occurrence process of the abnormal event in a multi-angle mode, and predicting the occurrence track;
s7, when new abnormal detection data are encountered, judging that an abnormal event has occurred but without determining its category, recording the new data, reconstructing the network structure and training a new model;
and S8, repeating the steps from S3 to S6, and continuously carrying out real-time analysis on the video stream.
3. The method of claim 2,
in step S1, the data set comes from public network data sets and screenshots of online videos, and the original images are translated, rotated and flipped using data enhancement techniques to expand the data set.
4. The method of claim 2,
in step S2, firstly, a 4-fold downsampling preprocessing is performed on the input image to reduce the image resolution; then, extracting features through a backbone network MobileNet;
an attention multi-scale feature fusion module SENet is added behind the backbone network;
when the heat map converges during model training, multi-kernel filtering is added as post-processing at the heat-map branch output: each key point is traversed with m (m > 1) kernel filters of different sizes, and if the maximum value within a kernel range is not equal to the current value, the current value is set to 0; finally m maximum-filtering results are obtained, so that local peaks always retain the maximum value.
5. The method of claim 2,
in step S4, the classification criteria include: whether the event is a violent event or not and the number of people participating in the event;
and counting the number of pedestrians, displaying the count n in the visual output, and having the classifier divide events into single-person, two-person and multi-person events according to the counted crowd size.
6. The method of claim 2,
in step S5, when the real-time video stream detection result is compared and analyzed with the sample library, if the confidence of the abnormal event is found to exceed the alarm threshold, an alarm message is sent, and the category, level and position information of the abnormal event is sent to the relevant staff, and the staff takes measures according to the alarm message.
7. The method of claim 2,
in step S6, comprehensive judgment is performed by using videos from more than one angle, and motion characteristics of targets in different cameras at the same time are calculated on the basis of time synchronization, so as to predict a motion trajectory.
8. The method of claim 2,
in step S7, when a new abnormal detection category is encountered, the event is determined to be an abnormal event, but the category cannot be determined, the network records new data, and continues to add a sample set at a later stage, reconstruct a network structure, and train a new model.
CN202210870151.7A 2022-07-22 2022-07-22 Abnormal event detection and early warning method based on lightweight network Pending CN115294519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210870151.7A CN115294519A (en) 2022-07-22 2022-07-22 Abnormal event detection and early warning method based on lightweight network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210870151.7A CN115294519A (en) 2022-07-22 2022-07-22 Abnormal event detection and early warning method based on lightweight network

Publications (1)

Publication Number Publication Date
CN115294519A true CN115294519A (en) 2022-11-04

Family

ID=83824660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210870151.7A Pending CN115294519A (en) 2022-07-22 2022-07-22 Abnormal event detection and early warning method based on lightweight network

Country Status (1)

Country Link
CN (1) CN115294519A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746322A (en) * 2023-12-15 2024-03-22 武汉展博人工环境有限公司 Amusement facility safety early warning method and system based on image recognition
CN117787671A (en) * 2024-02-28 2024-03-29 陕西跃途警用装备制造有限公司 Police integrated system based on video monitoring and intelligent patrol
CN117787671B (en) * 2024-02-28 2024-05-17 陕西跃途警用装备制造有限公司 Police integrated system based on video monitoring and intelligent patrol
CN118101904A (en) * 2024-04-28 2024-05-28 东方空间(江苏)航天动力有限公司 Sea-shooting rocket video monitoring method and monitoring device

Similar Documents

Publication Publication Date Title
Aboah A vision-based system for traffic anomaly detection using deep learning and decision trees
CN109819208B (en) Intensive population security monitoring management method based on artificial intelligence dynamic monitoring
CN110738127A (en) Helmet identification method based on unsupervised deep learning neural network algorithm
CN102163290B (en) Method for modeling abnormal events in multi-visual angle video monitoring based on temporal-spatial correlation information
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network
WO2017122258A1 (en) Congestion-state-monitoring system
CN111488804A (en) Labor insurance product wearing condition detection and identity identification method based on deep learning
Lazaridis et al. Abnormal behavior detection in crowded scenes using density heatmaps and optical flow
CN110188807A (en) Tunnel pedestrian target detection method based on cascade super-resolution network and improvement Faster R-CNN
KR102035592B1 (en) A supporting system and method that assist partial inspections of suspicious objects in cctv video streams by using multi-level object recognition technology to reduce workload of human-eye based inspectors
CN103986910A (en) Method and system for passenger flow statistics based on cameras with intelligent analysis function
CN111738218B (en) Human body abnormal behavior recognition system and method
CN112287827A (en) Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN103902966A (en) Video interaction event analysis method and device base on sequence space-time cube characteristics
CN111402298A (en) Grain depot video data compression method based on target detection and trajectory analysis
CN113569766B (en) Pedestrian abnormal behavior detection method for patrol of unmanned aerial vehicle
CN114202803A (en) Multi-stage human body abnormal action detection method based on residual error network
CN115223246A (en) Personnel violation identification method, device, equipment and storage medium
Khan et al. Comparative study of various crowd detection and classification methods for safety control system
Kumar Crowd behavior monitoring and analysis in surveillance applications: a survey
CN117011774A (en) Cluster behavior understanding system based on scene knowledge element
CN111160150A (en) Video monitoring crowd behavior identification method based on depth residual error neural network convolution
Katariya et al. A pov-based highway vehicle trajectory dataset and prediction architecture
CN116152745A (en) Smoking behavior detection method, device, equipment and storage medium
CN115909144A (en) Method and system for detecting abnormity of surveillance video based on counterstudy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination