CN111339861A - Seat occupancy state detection method - Google Patents

Seat occupancy state detection method

Info

Publication number
CN111339861A
Authority
CN
China
Prior art keywords: frame, seat, bounding box, default, box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010097035.7A
Other languages
Chinese (zh)
Inventor
王斌
刘廷泰
唐蕾
刘传清
金择真
刘攀锋
于子昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology
Priority to CN202010097035.7A
Publication of CN111339861A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A seat occupancy state detection method realized with object detection technology, belonging to the technical field of deep learning. Images of human heads from all directions are collected, the data are analyzed and annotated, and a head detection model is trained. The trained model is loaded into a program; the seat positions in the image are marked manually in advance, and head detection is then performed on the image of each seat position, so that the occupancy state of the seat is judged and the detection results are fed back to the system.

Description

Seat occupancy state detection method
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a seat occupancy state detection method.
Background
Deep learning is a new research direction in the field of machine learning and one method of realizing artificial intelligence. It learns the intrinsic regularities and representation hierarchies of sample data, and the information obtained in the learning process is very helpful for the interpretation of data such as text, images and sound. Its ultimate goal is to give machines the same analytical and learning ability as humans, so that they can recognize text, images, sound and other data.
In recent years, with the continuous development and maturation of deep learning, a leap forward has been achieved in its combination with machine vision, especially with computer vision. At the same time, continued progress in hardware and electronic technology has promoted the combination of monitoring equipment with object detection technology, making intelligent monitoring systems possible. Existing object detection is mostly implemented with deep learning algorithms, for example: the YOLO family (YOLOv1, YOLOv2, YOLOv3), a one-stage approach that performs detection in a single step rather than separating candidate-region generation from region classification; and the SSD algorithm, which adopts the idea of grid division and considers different scales on feature maps of different sizes, whereas an RPN considers different scales on a single feature map. To detect objects of different scales, SSD performs sliding-window-style prediction on the feature maps of different convolutional layers: small objects are detected on feature maps output by earlier convolutional layers, and large objects on feature maps of later layers. Comparing the two algorithms, YOLO has difficulty detecting small objects and localizes inaccurately, while SSD can overcome these disadvantages to some extent.
Meanwhile, compared with detecting the seat state through hardware, a seat occupancy state detection device adopting this object detection method is more convenient to install, more practical to realize, more reliable and cheaper.
Combining the above research methods, object detection can be realized with deep learning algorithms, among which the SSD algorithm is the more practical. The SSD algorithm can be implemented with the TensorFlow framework, which performs the various tensor computations required for object detection.
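For illustration, a minimal TensorFlow sketch of this multi-scale idea follows; the layer sizes, the head-versus-background class count, and the four default boxes per cell are assumptions for the sketch, not the patent's actual network:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 2      # head vs. background (assumed)
BOXES_PER_CELL = 4   # default boxes per feature-map cell (assumed)

def detection_head(fmap):
    # A 3x3 convolution predicts 4 location offsets and NUM_CLASSES
    # scores for every default box at every feature-map cell.
    loc = layers.Conv2D(BOXES_PER_CELL * 4, 3, padding="same")(fmap)
    cls = layers.Conv2D(BOXES_PER_CELL * NUM_CLASSES, 3, padding="same")(fmap)
    return loc, cls

inputs = tf.keras.Input(shape=(300, 300, 3))
# Toy backbone producing feature maps of decreasing size; earlier (larger)
# maps serve small objects, later (smaller) maps serve large objects.
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
f1 = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
f2 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(f1)
f3 = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(f2)
outputs = [t for f in (f1, f2, f3) for t in detection_head(f)]
model = tf.keras.Model(inputs, outputs)
model.summary()
```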
Disclosure of Invention
Technical problem to be solved
The invention aims to provide a seat occupancy state detection method that uses machine vision technology to detect human heads at seat positions in scenes such as libraries and study rooms and judge whether a seat is occupied by a person, thereby avoiding the waste of seat resources and improving seat utilization.
(II) technical scheme
To achieve the above object, the invention provides the following technical scheme: a seat occupancy state detection method, characterized by comprising the following steps:
Step 1: data set construction. A sample set is collected, consisting mainly of images of human heads from all directions and whole-body images of people, and labels are annotated manually to produce a deep learning object detection training set for training the head detection model;
Step 2: deep learning training of the head detection model;
Step 3: seat state detection.
Further, the step 2 specifically comprises the following steps:
(1) images are acquired from the data set, the input images are processed with a series of convolution operations, and feature maps of different sizes are generated;
(2) a 3 × 3 convolution is applied to the resulting feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) the NMS algorithm is executed and the correspondence between ground-truth labels and default boxes is determined: when the intersection-over-union between a ground-truth box and a default box is higher than the 0.5 threshold, the default box is matched to that ground-truth box, and the output default box is the bounding box locating the specified target (a minimal matching sketch follows this list).
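A minimal sketch of the matching rule in (4), assuming boxes are given as [xmin, ymin, xmax, ymax] arrays; the 0.5 threshold comes from the text, everything else is illustrative:

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match(defaults, truths, threshold=0.5):
    # A default box becomes a positive for a ground-truth box
    # when their IoU exceeds the 0.5 threshold.
    matches = {}
    for i, d in enumerate(defaults):
        for j, g in enumerate(truths):
            if iou(d, g) > threshold:
                matches[i] = j
    return matches

defaults = np.array([[0, 0, 10, 10], [20, 20, 40, 40]], dtype=float)
truths = np.array([[1, 1, 11, 11]], dtype=float)
print(match(defaults, truths))  # {0: 0}: first default box matches the truth
```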
Further, the step 3 specifically includes the following steps:
(1) the camera acquires image data, and the seat positions in the image are marked manually with rectangular boxes;
(2) the image inside each rectangular box is extracted separately, each image corresponding to one seat; a series of convolution operations is performed on the images to generate feature maps, a 3 × 3 convolution is applied to the feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) human heads and confidence values are determined from the head confidence, and prediction boxes belonging to the background are filtered out;
(5) the remaining prediction boxes are decoded and sorted in descending order of confidence, only the top 400 are kept, and the NMS algorithm is executed to filter heavily overlapping prediction boxes; if any prediction boxes remain, a detection result is obtained, namely the head bounding boxes in the image, and the corresponding seat is judged occupied; otherwise the seat is empty.
Further, the method for generating the default bounding boxes in step 2 (2) and step 3 (2) is as follows:
On the feature map obtained after the series of convolutions, a series of concentric default bounding boxes is generated centered on each grid point of the feature map. The small square box has side length min_size and the large square box has side length (min_size · max_size)^(1/2); an aspect ratio is additionally defined to generate two rectangular boxes.
The min_size and max_size of the default boxes on each feature map are calculated by the following formula:
s_k = s_min + (s_max − s_min)(k − 1) / (m − 1),  k ∈ [1, m]
where s_k is the ratio of the default box size to the picture and m is the number of feature maps.
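For illustration, a short sketch computing these sizes from the scale rule; the s_min = 0.2 and s_max = 0.9 values and the 300-pixel input are the SSD paper's defaults, assumed here rather than taken from the text:

```python
def default_box_sizes(m, s_min=0.2, s_max=0.9, img_size=300):
    # s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1), k = 1..m,
    # with one extra scale s_{m+1} bounding the largest boxes.
    scales = [s_min + (s_max - s_min) * (k - 1) / (m - 1)
              for k in range(1, m + 2)]
    return [(scales[k] * img_size, scales[k + 1] * img_size)
            for k in range(m)]

for k, (mn, mx) in enumerate(default_box_sizes(m=6), start=1):
    print(f"feature map {k}: min_size={mn:.1f}, max_size={mx:.1f}")
```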
Further, the bounding box prediction values in step 2 (3) and step 3 (3) are computed as follows:
l_cx = (b_cx − d_cx) / d_w,  l_cy = (b_cy − d_cy) / d_h
l_w = log(b_w / d_w),  l_h = log(b_h / d_h)
where d = (d_cx, d_cy, d_w, d_h) is the prior box position and b = (b_cx, b_cy, b_w, b_h) is the ground-truth box position; the bounding box prediction value l is the transformed value of b relative to d, called the encoding.
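A minimal sketch of this encoding, assuming boxes in (cx, cy, w, h) form with values relative to the picture:

```python
import math

def encode(d, b):
    # Encode ground-truth box b relative to prior box d; both are
    # (cx, cy, w, h). The result l is the regression target.
    d_cx, d_cy, d_w, d_h = d
    b_cx, b_cy, b_w, b_h = b
    return ((b_cx - d_cx) / d_w,
            (b_cy - d_cy) / d_h,
            math.log(b_w / d_w),
            math.log(b_h / d_h))

print(encode((0.5, 0.5, 0.2, 0.2), (0.52, 0.48, 0.25, 0.18)))
```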
Further, the confidence error in step 3 (4) is computed as follows:
L_conf(x, c) = − Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) − Σ_{i∈Neg} log(ĉ_i^0),  with  ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)
where c is the category confidence prediction value and x_ij^p ∈ {0, 1} is an indicator parameter; x_ij^p = 1 indicates that the i-th prior box is matched to the j-th ground truth, whose category is p.
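A small NumPy sketch of this loss under the usual SSD convention that class 0 is the background (an assumption here); the positive term uses the matched category p and the negative term the background score:

```python
import numpy as np

def conf_loss(scores, matched_class, positive):
    # scores: (N, C) raw class scores c_i^p for N prior boxes;
    # matched_class: (N,) category p of the matched ground truth;
    # positive: (N,) bool, True where x_ij^p = 1 for some truth j.
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    softmax = exp / exp.sum(axis=1, keepdims=True)   # \hat{c}_i^p
    pos = -np.log(softmax[positive, matched_class[positive]]).sum()
    neg = -np.log(softmax[~positive, 0]).sum()       # background term
    return pos + neg

scores = np.array([[0.2, 2.0], [1.5, 0.1], [0.3, 0.4]])
print(conf_loss(scores, np.array([1, 0, 0]),
                np.array([True, False, False])))
```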
Further, the decoding and NMS in step 2 (4) and step 3 (5) are implemented as follows:
1) Decoding:
At prediction time, the true bounding box position b is obtained by decoding:
b_cx = d_cx + d_w · l_cx,  b_cy = d_cy + d_h · l_cy
b_w = d_w · exp(l_w),  b_h = d_h · exp(l_h)
where l is the bounding box prediction value and d = (d_cx, d_cy, d_w, d_h) is the prior box position.
2) NMS (non-maximum suppression):
When two boxes are spatially very close, the box with the higher score is taken as the reference and the IoU is computed to measure their overlap; if the overlap exceeds the threshold, the box with the smaller score is suppressed and only the prediction box with the larger score is retained, thereby filtering out heavily overlapping prediction boxes.
Further, the IoU in the NMS is calculated as:
J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|)
where A is the set [x_1, x_2], B is the set [y_1, y_2], and J(A, B) is the intersection-over-union.
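A NumPy sketch of the decoding and suppression described above; the [xmin, ymin, xmax, ymax] format for NMS and the 0.45 suppression threshold are assumptions:

```python
import numpy as np

def decode(priors, loc):
    # Recover boxes b = (cx, cy, w, h) from priors d and predictions l.
    b = np.empty_like(loc)
    b[:, 0] = priors[:, 0] + priors[:, 2] * loc[:, 0]  # b_cx = d_cx + d_w*l_cx
    b[:, 1] = priors[:, 1] + priors[:, 3] * loc[:, 1]  # b_cy = d_cy + d_h*l_cy
    b[:, 2] = priors[:, 2] * np.exp(loc[:, 2])         # b_w  = d_w * exp(l_w)
    b[:, 3] = priors[:, 3] * np.exp(loc[:, 3])         # b_h  = d_h * exp(l_h)
    return b

def nms(boxes, scores, threshold=0.45):
    # Greedy non-maximum suppression over [xmin, ymin, xmax, ymax] boxes;
    # keeps the higher-scoring box and drops boxes overlapping it too much.
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        ix1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        iy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        ix2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        iy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        order = rest[inter / (area_i + area_r - inter) <= threshold]
    return keep
```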
(III) advantageous effects
The invention provides a seat occupancy state detection method. Considering factors such as technology, cost and implementation difficulty in practical application, it uses the SSD object detection algorithm to design a deep-learning-based head detection model and seat occupancy state detection method that is cheaper and easier to realize than traditional seat occupancy state detection. Actual tests and theoretical derivation show that the invention has high detection accuracy and strong practicability, and after refinement it can be applied to detecting the occupancy state of seats in places such as libraries and study rooms.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of seat positions marked in an image with rectangular boxes.
FIG. 2 is a schematic diagram of the generation of bounding boxes in a feature map.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-2, the present invention provides the following technical solution: a seat occupancy state detection device using the SSD object detection algorithm, which detects whether a person is present at a seat position by means of head detection, thereby judging the seat occupancy state.
The method comprises the following steps:
Step 1: data set construction.
(1) A sample set is collected, consisting mainly of images of human heads from all directions and whole-body images of people; to suit practical requirements and ensure the objectivity of the data set, the USC pedestrian detection data set is selected as its basis.
(2) Construction of the deep learning head detection training set. Using the data annotation tool labelImg, heads in the pictures are annotated with boxes through a visual operation interface, and XML files in VOC format are generated automatically (a short sketch of reading these annotations follows).
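For illustration, a short sketch of reading such an annotation back; the "head" label and the file path are assumptions about how the data set was annotated:

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path, label="head"):
    # Read all bounding boxes with the given label from a labelImg
    # VOC-format XML annotation file.
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        if obj.findtext("name") != label:
            continue
        bb = obj.find("bndbox")
        boxes.append(tuple(int(float(bb.findtext(t)))
                           for t in ("xmin", "ymin", "xmax", "ymax")))
    return boxes

# Hypothetical annotation file produced by labelImg:
# print(read_voc_boxes("annotations/sample_0001.xml"))
```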
Step 2: deep learning training of the head detection model. The specific process is as follows:
(1) images are acquired from the data set, the input images are processed with a series of convolution operations, and feature maps of different sizes are generated;
(2) a 3 × 3 convolution is applied to the resulting feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) the NMS algorithm is executed and the correspondence between ground-truth labels and default boxes is determined: when the intersection-over-union between a ground-truth box and a default box is higher than the 0.5 threshold, the default box is matched to that ground-truth box, and the output default box is the bounding box locating the specified target.
Step 3: seat state detection. The process is as follows:
(1) a camera is fixed above the seats at a position from which all seats are observable, so that its picture is a top-down view; the camera acquires image data, and the seat positions in the image are marked manually with rectangular dashed boxes;
(2) a series of convolution operations is performed on the images to generate feature maps, a 3 × 3 convolution is applied to the feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) human heads and confidence values are determined from the head confidence, and prediction boxes belonging to the background are filtered out;
(5) the remaining prediction boxes are decoded and sorted in descending order of confidence, only the top 400 are kept, and the NMS algorithm is executed to filter heavily overlapping prediction boxes; if any prediction boxes remain, a detection result is obtained, namely the head bounding boxes in the image, and the corresponding seat is judged occupied; otherwise the seat is empty (a minimal post-processing sketch follows this list).
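A minimal sketch of this post-processing order; the 0.5 confidence threshold is an assumption, while the top-400 cut and the occupied/empty decision follow the text (the nms function sketched earlier in the description can be passed in):

```python
import numpy as np

def seat_occupied(scores, boxes, conf_thresh=0.5, top_k=400, nms_fn=None):
    # scores: (N,) head confidence per decoded prediction box;
    # boxes:  (N, 4) decoded boxes for one seat's image region.
    keep = scores > conf_thresh             # drop background predictions
    scores, boxes = scores[keep], boxes[keep]
    order = scores.argsort()[::-1][:top_k]  # descending, keep top 400
    scores, boxes = scores[order], boxes[order]
    if nms_fn is not None:                  # filter heavily overlapping boxes
        idx = nms_fn(boxes, scores)
        scores, boxes = scores[idx], boxes[idx]
    return len(boxes) > 0, boxes            # any remaining head => occupied
```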
As shown in FIG. 1, the seat positions are marked manually on the image in advance with rectangular boxes. The camera is fixed above the seats at a position from which all seats are observable, so that its picture is a top-down view; the seat positions in the picture are marked with dashed boxes, and during seat detection the picture inside each dashed box is extracted, so that detection runs directly on the image of the seat position to judge whether the seat is occupied (a minimal cropping sketch follows).
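A minimal OpenCV sketch of this per-seat cropping; the frame is a synthetic placeholder (in practice it would come from the camera via cv2.VideoCapture) and the seat coordinates are hypothetical:

```python
import numpy as np
import cv2

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
# Seat rectangles (xmin, ymin, xmax, ymax), marked manually in advance:
seats = {"seat_1": (50, 100, 200, 260), "seat_2": (260, 100, 410, 260)}

for name, (x1, y1, x2, y2) in seats.items():
    roi = frame[y1:y2, x1:x2]   # the image of one seat position
    # head detection would run on `roi`; the seat is occupied if a
    # head bounding box remains after post-processing
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```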
As shown in FIG. 2, bounding boxes are generated in the feature map.
On the feature map obtained after the series of convolutions, a series of concentric default bounding boxes is generated centered on each grid point of the feature map. The small square box has side length min_size and the large square box has side length (min_size · max_size)^(1/2); an aspect ratio is additionally defined to generate two rectangular boxes.
The min_size and max_size of the default boxes on each feature map are calculated by the following formula:
s_k = s_min + (s_max − s_min)(k − 1) / (m − 1),  k ∈ [1, m]
where s_k is the ratio of the default box size to the picture and m is the number of feature maps.
Calculation of the bounding box prediction values:
l_cx = (b_cx − d_cx) / d_w,  l_cy = (b_cy − d_cy) / d_h
l_w = log(b_w / d_w),  l_h = log(b_h / d_h)
where d = (d_cx, d_cy, d_w, d_h) is the prior box position and b = (b_cx, b_cy, b_w, b_h) is the ground-truth box position; the bounding box prediction value l is the transformed value of b relative to d.
Calculation of the confidence error:
L_conf(x, c) = − Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) − Σ_{i∈Neg} log(ĉ_i^0),  with  ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)
where c is the category confidence prediction value and x_ij^p ∈ {0, 1} is an indicator parameter; x_ij^p = 1 indicates that the i-th prior box is matched to the j-th ground truth, whose category is p.
Calculation of the prediction box decoding:
b_cx = d_cx + d_w · l_cx,  b_cy = d_cy + d_h · l_cy
b_w = d_w · exp(l_w),  b_h = d_h · exp(l_h)
where l is the bounding box prediction value and d = (d_cx, d_cy, d_w, d_h) is the prior box position.
Then heavily overlapping prediction boxes are filtered with the NMS, where the IoU is calculated as:
J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|)
where A is the set [x_1, x_2], B is the set [y_1, y_2], and J(A, B) is the intersection-over-union.
The prediction boxes remaining after filtering constitute the detection result.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (8)

1. A seat occupancy state detection method is characterized by comprising the following steps:
step 1: data set construction, wherein a sample set is collected, consisting mainly of images of human heads from all directions and whole-body images of people, and labels are annotated manually to produce a deep learning object detection training set for training a head detection model;
step 2: deep learning training of the head detection model;
step 3: seat state detection.
2. A method for detecting the occupancy state of a seat as claimed in claim 1, wherein said step 2 specifically comprises the steps of:
(1) images are acquired from the data set, the input images are processed with a series of convolution operations, and feature maps of different sizes are generated;
(2) a 3 × 3 convolution is applied to the resulting feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) the NMS algorithm is executed and the correspondence between ground-truth labels and default boxes is determined: when the intersection-over-union between a ground-truth box and a default box is higher than the 0.5 threshold, the default box is matched to that ground-truth box, and the output default box is the bounding box locating the specified target.
3. A method for detecting a seat occupancy state according to claim 2, wherein said step 3 specifically comprises the steps of:
(1) the camera acquires image data, and the seat positions in the image are marked manually with rectangular boxes;
(2) the image inside each rectangular box is extracted separately, each image corresponding to one seat; a series of convolution operations is performed on the images to generate feature maps, a 3 × 3 convolution is applied to the feature maps, and default bounding boxes are generated;
(3) an offset and a classification probability are predicted for each bounding box;
(4) human heads and confidence values are determined from the head confidence, and prediction boxes belonging to the background are filtered out;
(5) the remaining prediction boxes are decoded and sorted in descending order of confidence, only the top 400 are kept, and the NMS algorithm is executed to filter heavily overlapping prediction boxes; if any prediction boxes remain, a detection result is obtained, namely the head bounding boxes in the image, and the corresponding seat is judged occupied; otherwise the seat is empty.
4. A seat occupancy state detection method according to claim 3, wherein the method for generating the default bounding boxes in step 2 (2) and step 3 (2) is:
on the feature map obtained after the series of convolutions, a series of concentric default bounding boxes is generated centered on each grid point of the feature map, wherein the small square box has side length min_size and the large square box has side length (min_size · max_size)^(1/2), and an aspect ratio is additionally defined to generate two rectangular boxes,
and the min_size and max_size of the default boxes on each feature map are calculated by the following formula:
s_k = s_min + (s_max − s_min)(k − 1) / (m − 1),  k ∈ [1, m]
where s_k is the ratio of the default box size to the picture and m is the number of feature maps.
5. A seat occupancy state detection method according to claim 3, wherein the bounding box prediction values in step 2 (3) and step 3 (3) are implemented as:
l_cx = (b_cx − d_cx) / d_w,  l_cy = (b_cy − d_cy) / d_h
l_w = log(b_w / d_w),  l_h = log(b_h / d_h)
where d = (d_cx, d_cy, d_w, d_h) is the prior box position and b = (b_cx, b_cy, b_w, b_h) is the ground-truth box position; the bounding box prediction value l is the transformed value of b relative to d, called the encoding.
6. A method for detecting a seat occupancy state according to claim 3, wherein the confidence error in step 3 (4) is implemented as:
L_conf(x, c) = − Σ_{i∈Pos} x_ij^p · log(ĉ_i^p) − Σ_{i∈Neg} log(ĉ_i^0),  with  ĉ_i^p = exp(c_i^p) / Σ_p exp(c_i^p)
where c is the category confidence prediction value and x_ij^p ∈ {0, 1} is an indicator parameter; x_ij^p = 1 indicates that the i-th prior box is matched to the j-th ground truth, whose category is p.
7. A seat occupancy state detection method according to claim 3, wherein the decoding and NMS in step 2 (4) and step 3 (5) are implemented as:
1) decoding:
at prediction time, the true bounding box position b is obtained by decoding:
b_cx = d_cx + d_w · l_cx,  b_cy = d_cy + d_h · l_cy
b_w = d_w · exp(l_w),  b_h = d_h · exp(l_h)
where l is the bounding box prediction value and d = (d_cx, d_cy, d_w, d_h) is the prior box position;
2) NMS (non-maximum suppression):
when two boxes are spatially very close, the box with the higher score is taken as the reference and the IoU is computed to measure their overlap; if the overlap exceeds the threshold, the box with the smaller score is suppressed and only the prediction box with the larger score is retained, thereby filtering out heavily overlapping prediction boxes.
8. A seat occupancy state detection method according to claim 7, wherein the IoU in the NMS is calculated as:
J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / (|A| + |B| − |A ∩ B|)
where A is the set [x_1, x_2], B is the set [y_1, y_2], and J(A, B) is the intersection-over-union.
CN202010097035.7A 2020-02-17 2020-02-17 Seat occupancy state detection method Pending CN111339861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010097035.7A CN111339861A (en) 2020-02-17 2020-02-17 Seat occupancy state detection method


Publications (1)

Publication Number Publication Date
CN111339861A true CN111339861A (en) 2020-06-26

Family

ID=71183491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010097035.7A Pending CN111339861A (en) 2020-02-17 2020-02-17 Seat occupancy state detection method

Country Status (1)

Country Link
CN (1) CN111339861A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190050981A1 (en) * 2017-08-09 2019-02-14 Shenzhen Keya Medical Technology Corporation System and method for automatically detecting a target object from a 3d image
CN109784190A (en) * 2018-12-19 2019-05-21 华东理工大学 A kind of automatic Pilot scene common-denominator target Detection and Extraction method based on deep learning
CN110490252A (en) * 2019-08-19 2019-11-22 西安工业大学 A kind of occupancy detection method and system based on deep learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200626