CN111914765B - Service area environment comfort level detection method and device and readable storage medium


Info

Publication number: CN111914765B
Application number: CN202010779020.9A
Authority: CN (China)
Prior art keywords: layer, service area, network, comfort level, convolution
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111914765A
Inventors: 李晓春, 邵奇可, 吴狄娟
Current Assignee: Hangzhou Pixel Technology Co., Ltd.
Original Assignee: Hangzhou Pixel Technology Co., Ltd.
Application filed by Hangzhou Pixel Technology Co., Ltd.; priority and filing date 2020-08-05
Publication of CN111914765A (application): 2020-11-10
Publication of CN111914765B (grant): 2022-07-12

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device and a readable storage medium for detecting the environmental comfort level of a highway service area. The method comprises the following steps: acquiring a service area environment comfort detection image data set and annotating the pedestrians in the detection area; determining the one-stage object detection model to be used; constructing a feature-extraction up-sampling layer based on scaling coefficients; constructing an up-sampling layer based on partial/whole-region feature extraction; and training the model. In the deployed system, the trained model detects and counts pedestrians in the detection area of the expressway service area in real time, and the environmental comfort level of the service area is then calculated from the count. Because the overall appearance and local details of a target are captured at the same time, features of both the whole target region and partial detail regions can be extracted during up-sampling, which further improves the object detection accuracy of the model.

Description

Service area environment comfort level detection method and device and readable storage medium
Technical Field
The invention relates to the technical field of image recognition and computer vision, in particular to a service area environment comfort level detection method, equipment and a readable storage medium.
Background
As highway construction accelerates and living standards rise, road users place ever higher demands on the customization and comfort of expressways, and the demand for well-built service areas grows accordingly. Expressway service-area construction in China has evolved from having no precedent, through borrowing foreign experience, to today's construction of distinctive domestic service areas. The service area is an indispensable facility of the expressway: it meets the basic needs of drivers, improves driving safety, raises transport efficiency, and maximizes the economic benefit of the expressway. It provides services to expressway users and is essential for safeguarding driving safety, maintaining transport efficiency, and relieving both driver fatigue and the physical limits of vehicle use. As expressway mileage and passenger volumes keep growing, service areas must inevitably offer more types and larger numbers of services, forming a distinctive economic zone whose industrial value attracts increasing attention. With the rapid growth of passenger flow, especially during holidays when large numbers of travellers pour into a service area, every aspect of its operation comes under severe strain. Timely knowledge of the passenger flow and environmental comfort data of the service area therefore makes it possible to take reasonable measures as the data change, to control passenger and vehicle flow, to provide the best and most convenient service to travellers, and at the same time to greatly reduce the operating pressure on the service area.
The environmental comfort level is calculated from the crowd-density grade: based on computer vision and pattern recognition, the monitoring image or video is analysed to obtain a quantized level of crowd density. The environmental comfort level is a powerful basis for managing large crowds; for example, it can provide the crowd-density distribution over different time periods in a shopping mall or retail outlet to help management allocate staff and resources. It can be widely applied to monitoring the entrances and key areas of facilities such as bus stations, coach stations, railway stations and airports, delivering accurate real-time data on the number and distribution of people, preventing safety hazards caused by overcrowding, and providing a basis for scientific scheduling and safety assurance. When crowd density is quantized into density grades, crowd-density estimation becomes a multi-class classification problem: crowd-related features are the input, and the density grade, i.e. the environmental comfort level, is the output. Common approaches to this multi-class classification problem include polynomial fitting and nearest-cluster partitioning. Polynomial fitting requires assumptions about the functional relationship between input and output, and performance suffers when those assumptions do not match reality. Nearest-cluster partitioning clusters the training samples of each density grade in feature space into several representative points; a sample of unknown class is then assigned the class of its nearest representative point. This method is simple to implement, but it depends heavily on parameter selection, and poorly chosen parameters degrade performance or even cause overfitting.
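As an illustration of the nearest-cluster partitioning baseline described above (this sketch is not part of the patent; the per-grade k-means and all function names are assumptions chosen for clarity), the idea can be written in a few lines of Python: representative points are formed for every density grade, and an unknown sample receives the grade of its nearest representative.

    import numpy as np

    def fit_representatives(features_by_grade, points_per_grade=3, iters=20, seed=0):
        """features_by_grade: dict {density_grade: (n_i, d) array of crowd features}."""
        rng = np.random.default_rng(seed)
        reps, grades = [], []
        for grade, X in features_by_grade.items():
            X = np.asarray(X, dtype=float)
            # plain k-means to obtain a few representative points for this density grade
            centers = X[rng.choice(len(X), points_per_grade, replace=False)].copy()
            for _ in range(iters):
                assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
                for j in range(points_per_grade):
                    if np.any(assign == j):
                        centers[j] = X[assign == j].mean(axis=0)
            reps.append(centers)
            grades.extend([grade] * points_per_grade)
        return np.vstack(reps), np.array(grades)

    def predict_grade(x, reps, grades):
        # density grade of the nearest representative point
        return grades[np.argmin(((reps - np.asarray(x, dtype=float)) ** 2).sum(-1))]

The sensitivity of this baseline to points_per_grade and to the clustering initialisation is exactly the parameter-selection dependence criticised above.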
Convolutional neural network algorithms based on video images are increasingly applied in image processing, offering automatic image feature extraction and strong robustness. However, detecting crowd density in a service area from live camera video streams places high demands on both the accuracy and the real-time performance of the recognition algorithm, which makes an object detection algorithm based on deep learning a reasonable choice. Deep-learning object detectors are divided into two-stage and one-stage models. Two-stage convolutional neural network models achieve better detection accuracy, but their forward inference is slow and cannot meet the real-time requirements of this service scenario. Traditional one-stage object detection models run in real time, but because pedestrians in a crowd frequently occlude one another, they do not reach the detection accuracy of two-stage models. A service area environment comfort level detection method optimized for mutual occlusion among pedestrians therefore helps improve the detection accuracy of the system while keeping its real-time performance within the requirements of the application scenario.
Disclosure of Invention
The invention optimizes the up-sampling process of a one-stage object detection model. Target features are extracted during the up-sampling stage of the deep network, so the design of this stage directly affects the training result of the convolutional neural network, and the training result in turn determines the accuracy of the service-area crowd-density estimate; the design of the up-sampling process of the service area environment comfort network is therefore particularly important.
In the service area environment comfort detection training data, pedestrians frequently occlude one another. If, for example, the left side of a person is blocked by other people, the target feature information from that person's right side is more reliable; it is precisely this partial occlusion that makes object detection inaccurate and thereby degrades the environmental comfort index. To address the loss of detection accuracy caused by the low resolution of occluded target features, the invention decomposes the target region into several smaller regions, so that the overall appearance and local details of a target are obtained at the same time; features of both the whole target region and partial detail regions can then be extracted during up-sampling, further improving the object detection accuracy of the model.
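The decomposition itself is simple to state. The sketch below is illustrative only (the centre-based box convention and the half-region coordinates are assumptions, since the patent gives only P = (x, y, w·β, h·β) and Q ∈ {U, D, L, R}); it shows how a detected pedestrian box yields one scaled whole region per scaling factor plus four partial regions:

    def scaled_region(box, beta):
        # whole region P shrunk or enlarged by the scaling factor beta
        x, y, w, h = box          # (x, y) assumed to be the box centre
        return (x, y, w * beta, h * beta)

    def partial_regions(box):
        # partial regions Q: upper, lower, left and right halves of the box
        x, y, w, h = box
        return {
            "U": (x, y - h / 4, w, h / 2),
            "D": (x, y + h / 4, w, h / 2),
            "L": (x - w / 4, y, w / 2, h),
            "R": (x + w / 4, y, w / 2, h),
        }

    box = (100.0, 200.0, 60.0, 120.0)   # example pedestrian box
    regions = [scaled_region(box, b) for b in (0.5, 0.7, 1.2, 1.5)]
    regions += list(partial_regions(box).values())

If, say, the left half of a pedestrian is occluded, the "R" region still contributes clean features, which is the intuition behind the decomposition.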
To solve this technical problem, an up-sampling optimization strategy based on scaling coefficients and partial/whole-region features is adopted. The specific technical scheme is as follows:
A service area environmental comfort level detection method, comprising the following steps:
1) Construct a service-area environment comfort data set N, a training data set T, a validation data set V, a training batch size batch, a number of training batches batches, a learning rate learningrate, and a scale coefficient α between the training data set T and the validation data set V.
[Equation images in the original define T and V as the subsets of N obtained by splitting N according to α, and each sample as an image of size h_k × w_k × r.]
Here V ∪ T = N, C ∈ N⁺, α ∈ (0, 1), batches ∈ N⁺, learningrate ∈ R⁺ and batch ∈ N⁺; h_k and w_k denote the height and width of an image and r the number of channels of the image.
2) Determine the one-stage object detection model to be trained. Set the depth of the convolutional neural network to L, the set of convolution kernels of the network convolution layers to G, the network output layer to fully connected form with convolution kernel set A, and the set of network feature maps to U. The number of grids corresponding to the k-th feature map of the l-th layer and the anchor set M are defined by the following equations (given as images in the original):
[Equation images: definitions of G, A, U, the per-feature-map grid count, and the anchor set M.]
The symbols in the equation images denote, respectively, the height, width and dimension of the convolution kernels, feature maps and anchors of the l-th layer, the fill (padding) size of the l-th-layer convolution kernels, and the convolution stride of the l-th layer; f denotes the excitation function of the convolution neurons and θ the selected input features; Λ ∈ N⁺ denotes the total number of anchors of the l-th layer, Ξ ∈ N⁺ the total number of nodes of the output layer, Φ ∈ N⁺ the total number of l-th-layer feature maps, and Δ ∈ N⁺ the total number of l-th-layer convolution kernels.
3) Design the partial/whole-region feature and target feature extraction model:
Region extraction uses a scaling parameter β to crop the whole target region into a scaled region P = (x, y, w·β, h·β); at the same time, the target-object region is divided into partial regions covering its upper, lower, left and right parts, Q ∈ {U, D, L, R}. The features of each region are computed by the convolution given as an equation image in the original:
[Equation image: the convolution producing the feature map of region r in layer l.]
Here l ∈ {convolution layer 1, convolution layer 2} and r ∈ {scaled region P, target-object partial region Q}; the bias term shown in the equation image is the deviation factor, f(·) is the ReLU rectified linear unit, * is the convolution operation, and the kernel shown in the equation image is the convolution kernel of the l-th layer.
[Equation images: the fusion of the region features into the global feature finally output by the up-sampling layer.]
4) Output, through the trained network model, the number of pedestrians in the video detection region of the current service area, and calculate the ratio of the pedestrian count to the area of the detection region to obtain the current environmental comfort level.
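Step 4 reduces to a single ratio. A minimal sketch follows (the comfort grades and their thresholds are assumptions; the patent specifies only the ratio of pedestrian count to detection-region area):

    def environmental_comfort(num_pedestrians, detection_area_m2,
                              grades=((0.2, "comfortable"), (0.5, "moderate"), (1.0, "crowded"))):
        density = num_pedestrians / detection_area_m2   # persons per square metre
        for threshold, label in grades:
            if density <= threshold:
                return density, label
        return density, "overcrowded"

    density, label = environmental_comfort(num_pedestrians=37, detection_area_m2=250.0)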
An electronic device comprises a memory and a processor communicatively connected to each other; the memory stores computer instructions, and the processor executes the computer instructions to perform the service area environment comfort level detection method described above.
A computer-readable storage medium stores computer instructions for causing a computer to execute the service area environment comfort level detection method described above.
The beneficial effects of the invention are as follows: the overall appearance and local details of a target are obtained simultaneously, features of both the whole target region and partial detail regions can be extracted during up-sampling, and the object detection accuracy of the model is further improved.
Drawings
FIG. 1 is a diagram of a network architecture;
FIG. 2 is a schematic diagram of a partial/whole region feature and target feature extraction model;
FIG. 3 is a flow chart of the deployment of the detection method.
Detailed Description
The technical solution in the embodiments of the present invention is clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In the service area environment comfort level detection method, the up-sampling optimization strategy based on scaling coefficients and partial-region features comprises the following steps:
step 1: the method comprises the steps of constructing 20000 data sets N by using pictures shot by a service area monitoring camera, wherein the number of training data sets T is 16000, the number of verification data sets V is 4000, the learning rate learningate value of 2 GPU video cards in a training hardware environment is 0.005, the value of a proportionality coefficient alpha between the training data sets T and the verification data sets V is 0.25, and the height h of the pictures is 0.25k=608,wkR 608, r is 3 and satisfies that the height, width, and number of channels of all images are set to be consistent.
Step 2: The one-stage object detection model is chosen as YOLOv3, with the depth of the convolutional neural network set to L = 139. The height, width and dimension of the convolution kernels are set as shown in FIG. 1; the fill (padding) size of the convolution kernels defaults to 1 and the convolution stride defaults to 1. The anchors are shared across the layers, with the anchor set M = {(10,13), (30,61), (156,198)} and Λ = 3. The network output layer uses the fully connected form, with the convolution kernel set A = {(1,1,30), (1,1,30), (1,1,30)} and Ξ = 3.
Step 3: As shown in FIG. 2, the scaling parameters are set to β = [0.5, 0.7, 1.2, 1.5]. Up-sampling layer 1 and up-sampling layer 2 are built on the partial/whole-region features and the target features, and the model is trained with the gradient descent method on the training set until it converges.
Step 4: As shown in FIG. 3, real-time detection is performed on the video stream of the expressway service area; the number of detected pedestrians in the current video monitoring area is output and the environmental comfort level parameter is calculated.
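A deployment sketch corresponding to the flow of FIG. 3 follows (illustrative only: the stream URL is a placeholder and count_pedestrians is a stand-in callable that wraps the trained detector of steps 2 and 3):

    import cv2  # OpenCV is assumed here only for reading the camera stream

    def run_realtime(stream_url, detection_area_m2, count_pedestrians):
        """count_pedestrians: callable(frame) -> int, wrapping the trained model."""
        cap = cv2.VideoCapture(stream_url)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            n = count_pedestrians(frame)
            comfort = n / detection_area_m2             # environmental comfort level parameter
            print(f"pedestrians={n}, comfort={comfort:.3f} persons per square metre")
        cap.release()

    # run_realtime("rtsp://<camera-address>/stream", 250.0, my_detector)  # placeholders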
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A service area environmental comfort level detection method, characterized in that the method comprises the following steps:
1) constructing a service-area environment comfort data set N, a training data set T, a validation data set V, a training batch size batch, a number of training batches batches, a learning rate learningrate, and a scale coefficient α between the training data set T and the validation data set V,
[Equation images: definitions of T and V as the subsets of N obtained by splitting N according to α, and of each sample as an image of size h_k × w_k × r.]
wherein V ∪ T = N, C ∈ N⁺, α ∈ (0, 1), batches ∈ N⁺, learningrate ∈ R⁺, batch ∈ N⁺; h_k and w_k represent the height and width of an image and r represents the number of channels of the image;
2) determining the one-stage object detection model to be trained, setting the depth of the convolutional neural network to L, the set of convolution kernels of the network convolution layers to G, the network output layer to fully connected form with convolution kernel set A, and the set of network feature maps to U, wherein the number of grids corresponding to the k-th feature map of the l-th layer and the anchor set M are defined by the following equations (given as images in the original):
[Equation images: definitions of G, A, U, the per-feature-map grid count, and the anchor set M.]
wherein the symbols in the equation images represent, respectively, the height, width and dimension of the convolution kernels, feature maps and anchors of the l-th layer, the fill (padding) size of the l-th-layer convolution kernels, and the convolution stride of the l-th layer; f represents the excitation function of the convolution neurons and θ the selected input features; Λ ∈ N⁺ represents the total number of anchors of the l-th layer, Ξ ∈ N⁺ the total number of nodes of the output layer, Φ ∈ N⁺ the total number of l-th-layer feature maps, and Δ ∈ N⁺ the total number of l-th-layer convolution kernels;
3) designing the partial/whole-region feature and target feature extraction model:
region extraction uses a scaling parameter β to crop the whole target region into a scaled region P = (x, y, w·β, h·β), and at the same time the target-object region is divided into partial regions covering its upper, lower, left and right parts, Q ∈ {U, D, L, R}; the features of each region are computed by the convolution given as an equation image in the original:
[Equation image: the convolution producing the feature map of region r in layer l.]
wherein l ∈ {convolution layer 1, convolution layer 2} and r ∈ {scaled region P, target-object partial region Q}; the bias term shown in the equation image is the deviation factor, f(·) is the ReLU rectified linear unit, * is the convolution operation, and the kernel shown in the equation image is the convolution kernel of the l-th layer;
[Equation images: the fusion of the region features into the global feature finally output by the up-sampling layer;]
4) outputting, through the trained network model, the number of pedestrians in the video detection region of the current service area, and calculating the ratio of the pedestrian count to the area of the detection region to obtain the current environmental comfort level.
2. An electronic device, comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing therein computer instructions, and the processor executing the computer instructions to perform the service area environment comfort level detecting method of claim 1.
3. A computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the service area environment comfort level detecting method of claim 1.
CN202010779020.9A 2020-08-05 2020-08-05 Service area environment comfort level detection method and device and readable storage medium Active CN111914765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010779020.9A CN111914765B (en) 2020-08-05 2020-08-05 Service area environment comfort level detection method and device and readable storage medium


Publications (2)

Publication Number Publication Date
CN111914765A CN111914765A (en) 2020-11-10
CN111914765B (en) 2022-07-12

Family

ID=73287863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010779020.9A Active CN111914765B (en) 2020-08-05 2020-08-05 Service area environment comfort level detection method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111914765B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11188823B2 (en) * 2016-05-31 2021-11-30 Microsoft Technology Licensing, Llc Training a neural network using another neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960195A (en) * 2017-03-27 2017-07-18 深圳市丰巨泰科电子有限公司 A kind of people counting method and device based on deep learning
CN107992841A (en) * 2017-12-13 2018-05-04 北京小米移动软件有限公司 The method and device of identification objects in images, electronic equipment, readable storage medium storing program for executing
CN108921822A (en) * 2018-06-04 2018-11-30 中国科学技术大学 Image object method of counting based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
* "AFDet: Anchor Free One Stage 3D Object Detection"; Runzhou Ge et al.; arXiv:2006.12671v2; 2020-07-30; pp. 1-10.
* "A Deep-Learning-Based Parking Space Detection Algorithm for Expressway Service Areas" (基于深度学习的高速服务区车位检测算法); 邵奇可 et al.; Computer Systems & Applications (计算机系统应用); 2019-06-30; Vol. 28, No. 6; pp. 62-68.

Also Published As

Publication number Publication date
CN111914765A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN110428428B (en) Image semantic segmentation method, electronic equipment and readable storage medium
CN110619369B (en) Fine-grained image classification method based on feature pyramid and global average pooling
CN111563902B (en) Lung lobe segmentation method and system based on three-dimensional convolutional neural network
CN111598030B (en) Method and system for detecting and segmenting vehicle in aerial image
CN114120102A (en) Boundary-optimized remote sensing image semantic segmentation method, device, equipment and medium
EP3065084A1 (en) Image recognition method, image recognition device, and recording medium
CN113421269A (en) Real-time semantic segmentation method based on double-branch deep convolutional neural network
CN111612754A (en) MRI tumor optimization segmentation method and system based on multi-modal image fusion
CN111597920B (en) Full convolution single-stage human body example segmentation method in natural scene
CN109034024B (en) Logistics vehicle type classification and identification method based on image target detection
CN112084890B (en) Method for identifying traffic signal sign in multiple scales based on GMM and CQFL
WO2021185121A1 (en) Model generation method and apparatus, object detection method and apparatus, device, and storage medium
CN110659601B (en) Depth full convolution network remote sensing image dense vehicle detection method based on central point
CN114092917B (en) MR-SSD-based shielded traffic sign detection method and system
CN111259733A (en) Point cloud image-based ship identification method and device
CN113763371A (en) Pathological image cell nucleus segmentation method and device
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN117474796B (en) Image generation method, device, equipment and computer readable storage medium
CN114639101A (en) Emulsion droplet identification system, method, computer equipment and storage medium
CN112435214B (en) Priori frame linear scaling-based pollen detection method and device and electronic equipment
CN111914765B (en) Service area environment comfort level detection method and device and readable storage medium
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN111914766B (en) Method for detecting business trip behavior of city management service
CN114863104B (en) Image segmentation method based on label distribution learning
CN114494861B (en) Aircraft target detection method based on multi-parameter optimization YOLOV network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant