CN110619271A - Pedestrian re-identification method based on depth region feature connection - Google Patents
Info
- Publication number
- CN110619271A CN110619271A CN201910741523.4A CN201910741523A CN110619271A CN 110619271 A CN110619271 A CN 110619271A CN 201910741523 A CN201910741523 A CN 201910741523A CN 110619271 A CN110619271 A CN 110619271A
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- image
- network
- feature
- connection layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The invention discloses a pedestrian re-identification method based on depth region feature connection, which comprises the following steps. Step 1): given images to be matched Pd and Pc. Step 2): design a feature extraction network with ResNet as the backbone, remove the final average pooling layer and softmax layer, and name the network FPEN. Step 3): use a pose estimation algorithm to predict the poses of the pedestrians in the images to be matched Pd and Pc, obtaining the pedestrian skeletons. By combining pedestrian pose estimation with deep learning, the method divides the pedestrian image into accurate sub-regions, connects the distinct features of the sub-regions, and comprehensively exploits both global and local image features; compared with other methods, it improves the accuracy of pedestrian re-identification and the robustness of the method across pedestrian re-identification datasets.
Description
Technical Field
The invention relates to the fields of deep learning, pedestrian pose estimation, and computer vision, and in particular to a pedestrian re-identification method based on depth region feature connection.
Background
With the continuing development of pedestrian re-identification technology and its improving recognition accuracy and efficiency, the technology is increasingly applied in intelligent security and plays an ever more important role in public security, criminal investigation, and public safety. It also plays an important role in emerging applications such as unmanned supermarkets and photo album clustering. With the arrival of the big-data era, pedestrian matching datasets in the re-identification field have grown enormously in both volume and variety: from the early VIPeR dataset, with 1264 images of 632 pedestrians captured by only two cameras, to the current MSMT17 dataset, with 126441 images of 4101 pedestrians across 15 cameras. This enrichment and diversification of the data poses great challenges to the efficiency and accuracy of pedestrian re-identification. After years of development and the arrival of the deep learning era, deep learning is now the mainstream approach to pedestrian re-identification. Nevertheless, the technology is still far from commercially mature, and deep-learning-based methods leave considerable room for exploration and improvement. Moreover, from the perspective of dataset migration, matching accuracy drops sharply after transferring a model across datasets, indicating low robustness.
Therefore, re-identifying pedestrians by combining deep learning with region segmentation remains a direction of high research value, significance, and feasibility.
Zhang et al. (ICCV, 2015) re-identify pedestrians with an MSTR (multi-task sparse representation) method, which requires no alignment and expresses features sparsely with a learned dictionary; they add constraints to reduce the number of mismatched image blocks, introduce an image-block similarity scoring mechanism, take global component matching into account, and use spatial layout information to reduce mismatches. Kim et al. (The Chinese University of Hong Kong, 2017) combine a CNN, RoI pooling, and an attention model: the RoI pooling layer extracts feature vectors corresponding to predefined parts of the input image, the attention model then selectively focuses on subsets of the CNN feature vectors, and the human body is divided into 13 local parts under this network framework to cope with occlusion of pedestrians at different positions. Su et al. (Tsinghua University, 2017) propose a Pose-Driven deep Convolutional (PDC) model that uses skeleton information of each body part to compensate for pose changes, learns more robust feature representations from the global image and each local part, and designs a pose-driven feature weighting sub-network that learns adaptive feature fusion to match features from the whole body and each local body part. Sun et al. (CVPR, 2018) design the Part-based Convolutional Baseline (PCB) network, which produces a comprehensive descriptor assembled from several part-level features for re-ID matching; the baseline accounts for the continuity of information transition among parts of the pedestrian image, an adversarial idea is adopted during training, and a Refined Part Pooling (RPP) strategy enforces within-part consistency, so that the part-based model finally attains stronger applicability and robustness. Zhao et al. (2017) transfer, without supervision, a pose estimation model trained on other datasets to the re-ID dataset to localize local features, extract part-level features, and obtain the final pedestrian features for matching by combining a human-body-region-guided multi-stage Feature Extraction Network (FEN) with a tree-structured competitive Feature Fusion Network (FFN). He et al. (2018) propose a partial re-ID method that integrates sparse reconstruction learning with deep learning; it requires no alignment of pedestrian images and places no constraint on the input image size, and an end-to-end deep model is trained by minimizing the reconstruction error between pedestrian images of the same identity while maximizing the reconstruction error between different identities.
Although the documents and methods above all address pedestrian image matching through deep learning and related techniques, the following disadvantages remain:
(1) each method targets only a single dataset, and the segmentation of local pedestrian regions is not robust, so overall recognition robustness is low;
(2) re-identification accuracy has still not reached the expected level, remaining low under both evaluation indices, mean average precision and Rank-1.
Therefore, how to design a new pedestrian re-identification method that achieves higher mean average precision and Rank-1 while performing well on every dataset is a problem that urgently needs to be solved.
Disclosure of Invention
To overcome the shortcomings of the above algorithms and methods and to improve the efficiency and accuracy of pedestrian re-identification, the invention provides a pedestrian re-identification method based on depth region feature connection.
The technical scheme of the invention is as follows:
a pedestrian re-identification method based on depth region feature connection is characterized by comprising the following steps:
step 1): given images to be matched Pd and Pc;
step 2): design a feature extraction network with ResNet as the backbone, remove the final average pooling layer and softmax layer, and name the network FPEN;
step 3): use a pose estimation algorithm to predict the poses of the pedestrians in the images to be matched Pd and Pc, obtaining the pedestrian skeletons;
step 4): according to the skeleton, divide the pedestrian in image Pd into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pdh, Pdl, Pdr, Pdu, and Pdd respectively;
step 5): according to the skeleton, divide the pedestrian in image Pc into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pch, Pcl, Pcr, Pcu, and Pcd respectively;
step 6): put images Pd and Pc respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdt and Vct respectively;
step 7): put the sub-images Pdh, Pdl, Pdr, Pdu, and Pdd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdh, Vdl, Vdr, Vdu, and Vdd respectively;
step 8): put the sub-images Pch, Pcl, Pcr, Pcu, and Pcd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vch, Vcl, Vcr, Vcu, and Vcd respectively;
step 9): connect the feature vectors Vdl and Vdr with a fully connected layer to obtain a new feature vector Vdm;
step 10): connect the feature vectors Vdu and Vdd with a fully connected layer to obtain a new feature vector Vdn;
step 11): connect the feature vectors Vcl and Vcr with a fully connected layer to obtain a new feature vector Vcm;
step 12): connect the feature vectors Vcu and Vcd with a fully connected layer to obtain a new feature vector Vcn;
step 13): connect the feature vectors Vdt, Vdm, and Vdn with a fully connected layer to obtain a new feature vector Vdb;
step 14): connect the feature vectors Vct, Vcm, and Vcn with a fully connected layer to obtain a new feature vector Vcb;
step 15): connect the feature vectors Vdb and Vdt with a fully connected layer to obtain the final feature description vector Vd of image Pd;
step 16): connect the feature vectors Vcb and Vct with a fully connected layer to obtain the final feature description vector Vc of image Pc;
step 17): compute the similarity between the feature vectors with the cosine distance formula to obtain the similarity distance D(Pd, Pc) between the two images;
step 18): if the similarity distance D(Pd, Pc) is greater than the set threshold T, the two images are considered to show the same person; otherwise they are not.
The invention has the following advantages: by combining pedestrian pose estimation with deep learning, the pedestrian image is divided into accurate sub-regions, the distinct features of the sub-regions are connected, and global and local image features are comprehensively exploited; compared with other methods, this improves the accuracy of pedestrian re-identification and the robustness of the method across pedestrian re-identification datasets.
Drawings
FIG. 1 is the skeleton diagram of the image to be matched Pd;
FIG. 2 is the skeleton diagram of the image to be matched Pc;
FIG. 3 is the body segmentation map of the image to be matched Pd;
FIG. 4 is the body segmentation map of the image to be matched Pc.
Detailed Description
The following describes a specific embodiment of the pedestrian re-identification method based on the depth region feature connection according to the present invention based on an example.
Step 1): given images to be matched Pd and Pc;
Step 2): design a feature extraction network with ResNet as the backbone, remove the final average pooling layer and softmax layer, and name the network FPEN;
step 3): use a pose estimation algorithm to predict the poses of the pedestrians in the images to be matched Pd and Pc, obtaining the pedestrian skeletons; in this embodiment, the pose estimation algorithm is OpenPose, and the skeleton diagrams of the images to be matched Pd and Pc are shown in FIG. 1 and FIG. 2;
step 4): according to the skeleton, divide the pedestrian in image Pd into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pdh, Pdl, Pdr, Pdu, and Pdd respectively; in this embodiment, the divided image is shown in FIG. 3;
step 5): according to the skeleton, divide the pedestrian in image Pc into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pch, Pcl, Pcr, Pcu, and Pcd respectively; in this embodiment, the divided image is shown in FIG. 4;
step 6): put images Pd and Pc respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdt and Vct respectively;
Step 7): put the sub-images Pdh, Pdl, Pdr, Pdu, and Pdd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdh, Vdl, Vdr, Vdu, and Vdd respectively;
Step 8): put the sub-images Pch, Pcl, Pcr, Pcu, and Pcd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vch, Vcl, Vcr, Vcu, and Vcd respectively;
Step 9): connect the feature vectors Vdl and Vdr with a fully connected layer to obtain a new feature vector Vdm;
Step 10): connect the feature vectors Vdu and Vdd with a fully connected layer to obtain a new feature vector Vdn;
Step 11): connect the feature vectors Vcl and Vcr with a fully connected layer to obtain a new feature vector Vcm;
Step 12): connect the feature vectors Vcu and Vcd with a fully connected layer to obtain a new feature vector Vcn;
Step 13): connect the feature vectors Vdt, Vdm, and Vdn with a fully connected layer to obtain a new feature vector Vdb;
Step 14): connect the feature vectors Vct, Vcm, and Vcn with a fully connected layer to obtain a new feature vector Vcb;
Step 15): connect the feature vectors Vdb and Vdt with a fully connected layer to obtain the final feature description vector Vd of image Pd;
Step 16): connect the feature vectors Vcb and Vct with a fully connected layer to obtain the final feature description vector Vc of image Pc;
Step 17): compute the similarity between the feature vectors with the cosine distance formula to obtain the similarity distance D(Pd, Pc) between the two images;
Step 18): if the similarity distance D(Pd, Pc) is greater than the set threshold T, the two images are considered to show the same person; otherwise they are not; in this embodiment, the threshold T is set to 0.74.
The embodiments described in this specification merely illustrate implementations of the inventive concept; the scope of the invention should not be considered limited to the specific forms set forth in the embodiments, but extends to equivalents that may occur to those skilled in the art upon consideration of the inventive concept.
Claims (1)
1. A pedestrian re-identification method based on depth region feature connection is characterized by comprising the following steps:
step 1): given images to be matched Pd and Pc;
step 2): design a feature extraction network with ResNet as the backbone, remove the last average pooling layer and softmax layer, and name the network FPEN;
step 3): use a pose estimation algorithm to predict the poses of the pedestrians in the images to be matched Pd and Pc, obtaining the pedestrian skeletons;
step 4): according to the skeleton, divide the pedestrian in image Pd into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pdh, Pdl, Pdr, Pdu, and Pdd respectively;
step 5): according to the skeleton, divide the pedestrian in image Pc into five sub-images, namely the head, left torso, right torso, upper legs, and lower legs, denoted Pch, Pcl, Pcr, Pcu, and Pcd respectively;
step 6): put images Pd and Pc respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdt and Vct respectively;
step 7): put the sub-images Pdh, Pdl, Pdr, Pdu, and Pdd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vdh, Vdl, Vdr, Vdu, and Vdd respectively;
step 8): put the sub-images Pch, Pcl, Pcr, Pcu, and Pcd respectively into the FPEN network for feature extraction, and add a fully connected layer at the end of the network to obtain the feature vectors Vch, Vcl, Vcr, Vcu, and Vcd respectively;
step 9): connect the feature vectors Vdl and Vdr with a fully connected layer to obtain a new feature vector Vdm;
step 10): connect the feature vectors Vdu and Vdd with a fully connected layer to obtain a new feature vector Vdn;
step 11): connect the feature vectors Vcl and Vcr with a fully connected layer to obtain a new feature vector Vcm;
step 12): connect the feature vectors Vcu and Vcd with a fully connected layer to obtain a new feature vector Vcn;
step 13): connect the feature vectors Vdt, Vdm, and Vdn with a fully connected layer to obtain a new feature vector Vdb;
step 14): connect the feature vectors Vct, Vcm, and Vcn with a fully connected layer to obtain a new feature vector Vcb;
step 15): connect the feature vectors Vdb and Vdt with a fully connected layer to obtain the final feature description vector Vd of image Pd;
step 16): connect the feature vectors Vcb and Vct with a fully connected layer to obtain the final feature description vector Vc of image Pc;
step 17): compute the similarity between the feature vectors with the cosine distance formula to obtain the similarity distance D(Pd, Pc) between the two images;
step 18): if the similarity distance D(Pd, Pc) is greater than the set threshold T, the two images are considered to show the same person; otherwise they are not.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910741523.4A CN110619271A (en) | 2019-08-12 | 2019-08-12 | Pedestrian re-identification method based on depth region feature connection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910741523.4A CN110619271A (en) | 2019-08-12 | 2019-08-12 | Pedestrian re-identification method based on depth region feature connection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110619271A (en) | 2019-12-27 |
Family
ID=68921807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910741523.4A Pending CN110619271A (en) | 2019-08-12 | 2019-08-12 | Pedestrian re-identification method based on depth region feature connection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619271A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112733695A (en) * | 2021-01-04 | 2021-04-30 | 电子科技大学 | Unsupervised key frame selection method in pedestrian re-identification field |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316031A (en) * | 2017-07-04 | 2017-11-03 | 北京大学深圳研究生院 | The image characteristic extracting method recognized again for pedestrian |
CN107832672A (en) * | 2017-10-12 | 2018-03-23 | 北京航空航天大学 | A kind of pedestrian's recognition methods again that more loss functions are designed using attitude information |
CN108229444A (en) * | 2018-02-09 | 2018-06-29 | 天津师范大学 | A kind of pedestrian's recognition methods again based on whole and local depth characteristic fusion |
CN109101865A (en) * | 2018-05-31 | 2018-12-28 | 湖北工业大学 | A kind of recognition methods again of the pedestrian based on deep learning |
CN109886113A (en) * | 2019-01-17 | 2019-06-14 | 桂林远望智能通信科技有限公司 | A kind of spacious view pedestrian recognition methods again based on region candidate network |
- 2019-08-12: application CN201910741523.4A published as CN110619271A (status: Pending)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316031A (en) * | 2017-07-04 | 2017-11-03 | 北京大学深圳研究生院 | The image characteristic extracting method recognized again for pedestrian |
CN107832672A (en) * | 2017-10-12 | 2018-03-23 | 北京航空航天大学 | A kind of pedestrian's recognition methods again that more loss functions are designed using attitude information |
CN108229444A (en) * | 2018-02-09 | 2018-06-29 | 天津师范大学 | A kind of pedestrian's recognition methods again based on whole and local depth characteristic fusion |
CN109101865A (en) * | 2018-05-31 | 2018-12-28 | 湖北工业大学 | A kind of recognition methods again of the pedestrian based on deep learning |
CN109886113A (en) * | 2019-01-17 | 2019-06-14 | 桂林远望智能通信科技有限公司 | A kind of spacious view pedestrian recognition methods again based on region candidate network |
Non-Patent Citations (1)
Title |
---|
HAIYU ZHAO ET AL.: "Spindle Net: Person Re-identification with Human Body Region Guided Feature Decomposition and Fusion", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112733695A (en) * | 2021-01-04 | 2021-04-30 | 电子科技大学 | Unsupervised key frame selection method in pedestrian re-identification field |
CN112733695B (en) * | 2021-01-04 | 2023-04-25 | 电子科技大学 | Unsupervised keyframe selection method in pedestrian re-identification field |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191227 |