CN111223107A - Point cloud data set manufacturing system and method based on point cloud deep learning - Google Patents

Point cloud data set manufacturing system and method based on point cloud deep learning

Info

Publication number
CN111223107A
Authority
CN
China
Prior art keywords
point cloud
cloud data
deep learning
information
file
Prior art date
Legal status
Pending
Application number
CN201911401147.0A
Other languages
Chinese (zh)
Inventor
李汉玢
周智颖
王延存
何豪杰
刘奋
Current Assignee
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd filed Critical Heading Data Intelligence Co Ltd
Priority to CN201911401147.0A priority Critical patent/CN111223107A/en
Publication of CN111223107A publication Critical patent/CN111223107A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses a point cloud data set manufacturing system and method based on point cloud deep learning. The method is configured to: acquire point cloud data; convert the longitude and latitude information of the point cloud data into coordinate system information; cut the converted point cloud data along the track direction into point cloud block data with storage regions of the same size; acquire a target to be identified in the point cloud block data and generate a labeling container that identifies the target; generate a labeling file describing the coordinate information and reflection intensity information in the labeling container; optimize the coordinate information and reflection intensity information in the labeling file; and generate, from the optimized labeling file, a binary format file for input to the deep learning network model. When the method is executed, a point cloud data set fused from single-line and multi-line point clouds can be produced, the data set can be matched with more deep learning network models, engineering application can proceed quickly, and no specific point cloud acquisition equipment needs to be designated, which reduces engineering cost.

Description

Point cloud data set manufacturing system and method based on point cloud deep learning
Technical Field
The invention relates to the technical field of measurement and control, in particular to a system and a method for manufacturing a point cloud data set based on point cloud deep learning.
Background
Deep learning network models for point clouds are typically built on 64-line, single-frame laser data sets. However, being tied to 64-line lasers and single frames causes the point cloud data collected in engineering applications to mismatch the deep learning network model.
Disclosure of Invention
The embodiment of the invention at least discloses a point cloud data set manufacturing method based on point cloud deep learning. When the disclosed method is executed, a point cloud data set fused from single-line and multi-line point clouds can be produced, the data set can be matched with more deep learning network models, engineering application can proceed quickly, no specific point cloud acquisition equipment needs to be designated, and engineering cost can be reduced.
To achieve the above, the method is configured to: acquire point cloud data; convert the longitude and latitude information of the point cloud data into coordinate system information; cut the converted point cloud data along a track direction to obtain point cloud block data with storage regions of the same size; acquire a target to be identified in the point cloud block data and generate a labeling container that identifies the target; generate a labeling file describing the coordinate information and reflection intensity information in the labeling container; optimize the coordinate information and reflection intensity information in the labeling file; and generate, from the optimized labeling file, a binary format file for input to the deep learning network model.
In some embodiments of the present disclosure, acquiring point cloud data is configured as: acquiring the point cloud data obtained after single-line and/or multi-line point cloud fusion.
In some embodiments of the present disclosure, cutting the converted point cloud data into point cloud block data with storage regions of the same size is configured as: the cut and converted point cloud data is point cloud block data no larger than 40 m × 20 m × 15 m.
In some embodiments of the present disclosure, generating a labeling container that identifies the target is configured as: selecting the target to be identified in the point cloud block data; generating an adjustable labeling box; and, after adjustment, taking the labeling box that encloses the target as the labeling container.
In some embodiments of the present disclosure, generating the labeling file is configured as: generating, in a markup language, a labeling file that describes the coordinate information and reflection intensity information in the labeling container.
In some embodiments of the present disclosure, the markup language is the Extensible Markup Language (XML).
In some embodiments of the present disclosure, optimizing the coordinate information and the reflection intensity information is configured as: optimizing the coordinate information and the reflection intensity information through a combination of one or more of elevation cutting, reflection intensity filtering, and data thinning.
In some embodiments of the present disclosure, the coordinate information and the reflection intensity information are extracted from the optimized labeling file; binary-format coordinate information is obtained from the coordinate information; binary-format intensity information is obtained from the reflection intensity information; and the binary-format coordinate information and the binary-format intensity information are combined into the binary format file.
In some embodiments of the present disclosure, the deep learning network model is a network model that learns layer by layer based on a three-dimensional space.
The embodiment of the invention at least discloses a point cloud data set manufacturing system based on point cloud deep learning. The system is configured with: a point cloud acquisition module configured to acquire point cloud data; a point cloud conversion module configured to convert the latitude and longitude information of the point cloud data into coordinate system information; a point cloud cutting module configured to cut the converted point cloud data along a track direction into point cloud block data with storage regions of the same size; a point cloud labeling module configured to acquire a target to be identified in the point cloud block data, generate a labeling container that identifies the target, and generate a labeling file describing the coordinate information and reflection intensity information in the labeling container; a point cloud processing module configured to optimize the coordinate information and the reflection intensity information in the labeling file; and a binary point cloud conversion module configured to generate, from the optimized labeling file, a binary format file for input to the deep learning network model.
In view of the above, other features and advantages of the disclosed exemplary embodiments will become apparent from the following detailed description of the disclosed exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
FIG. 1 is a flow chart of a method in an embodiment;
FIG. 2 is a block diagram of the system in the embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
The embodiment discloses a point cloud data set manufacturing method based on point cloud deep learning. Executing the method of the present embodiment produces binary-format point cloud files that serve as sample files input to a deep learning network model. The method is performed primarily on standard servers and/or computing devices.
In this embodiment, the server and/or computing device is implemented with at least a memory and a processor. The memory mainly comprises a program storage area and a data storage area. The program storage area may store an operating system (for example, an Android operating system, an iOS operating system, or another operating system) and application programs required by at least one function (for example, a sound playing function or an image playing function). The data storage area may store data created according to the use of the electronic terminal, including the related setting information or usage information of displayed applications referred to in the embodiments of the present application. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
Specifically, the method of the embodiment is executed on a computing device whose operating system is Windows 10 and whose graphics card is an RTX 2080; the development environment consists of PyCharm and Anaconda, and the deep learning network model is VoxelNet, which can identify targets in point cloud data.
Before the computing device executes the method of the embodiment, an acquisition vehicle collects the point cloud data of a trajectory segment on the road with a laser scanner, and the computing device fuses the multi-line point cloud data when multi-line point cloud data exist.
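As a minimal sketch of this fusion step (Python; it assumes the scans are already registered in a common coordinate frame and stored as (N, 4) x/y/z/intensity arrays, since the patent does not detail how single-line and multi-line data are fused), the final merge can be expressed as:

```python
import numpy as np

def fuse_point_clouds(clouds):
    """Merge several point clouds into one (N, 4) array of x, y, z, intensity.

    Only the concatenation step of fusion is shown; the input clouds are
    assumed to be registered to a common coordinate frame already.
    """
    return np.concatenate([np.asarray(c, dtype=np.float64) for c in clouds], axis=0)

# Hypothetical usage with pre-registered scans of the same trajectory segment:
# fused = fuse_point_clouds([single_line_scan, multi_line_scan])
```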
After the computing device has acquired the point cloud data resulting from single-line and/or multi-line point cloud fusion, the method of the embodiment is executed, implementing the steps shown in FIG. 1.
S100, the longitude and latitude information of the point cloud data is converted into coordinate system information; that is, the point cloud data in LAZ format is converted into LAS format.
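The coordinate part of S100 might look like the following sketch, which reprojects WGS84 longitude/latitude to planar coordinates with pyproj; the target EPSG code is purely illustrative, and reading and writing the LAZ/LAS containers themselves (for example with laspy and a LAZ backend) is omitted:

```python
import numpy as np
from pyproj import Transformer

def geographic_to_projected(lon, lat, z, target_epsg=32649):
    """Convert WGS84 longitude/latitude/height to planar map coordinates.

    EPSG:32649 (UTM zone 49N) is only an illustrative target system; the
    patent does not name the coordinate system actually used.
    """
    transformer = Transformer.from_crs("EPSG:4326", f"EPSG:{target_epsg}", always_xy=True)
    easting, northing = transformer.transform(np.asarray(lon), np.asarray(lat))
    return np.column_stack([easting, northing, np.asarray(z)])
```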
S200, the converted point cloud data is cut along the driving track of the acquisition vehicle, that is, the track direction of the trajectory segment, into a plurality of point cloud blocks of 40 m × 20 m × 15 m, so that the storage regions of the point cloud block data are the same size.
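A sketch of the S200 tiling is given below; it assumes an (N, 4) x/y/z/intensity array, that the trajectory heading is known as a single angle, and that the block grid is anchored at the coordinate origin, none of which the patent specifies:

```python
import numpy as np

def cut_into_blocks(points, heading_rad, block_size=(40.0, 20.0, 15.0)):
    """Split a registered point cloud into trajectory-aligned blocks.

    `points` is an (N, 4) x/y/z/intensity array and `heading_rad` the
    trajectory direction; both conventions are assumptions for illustration.
    Returns a dict mapping a block index to the points inside that block.
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    # Express x/y in a frame whose x-axis follows the trajectory.
    local = points[:, :3].copy()
    local[:, 0] = c * points[:, 0] + s * points[:, 1]
    local[:, 1] = -s * points[:, 0] + c * points[:, 1]

    idx = np.floor(local / np.asarray(block_size)).astype(np.int64)
    blocks = {}
    for key, point in zip(map(tuple, idx), points):
        blocks.setdefault(key, []).append(point)
    return {key: np.vstack(rows) for key, rows in blocks.items()}
```

Every resulting block covers the same 40 m × 20 m × 15 m volume, which is what makes the storage regions of the point cloud block data the same size.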
S300, a target to be identified is selected in the point cloud block data, and an adjustable labeling box is generated; the box is rotated, stretched and otherwise adjusted until it encloses the target, and the adjusted labeling box is taken as the labeling container.
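The adjustable labeling box can be modelled, for instance, as a center/size/yaw box with a point-in-box test; this parameterization is an assumption for illustration, since the patent only states that the box is rotated and stretched until it encloses the target:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AnnotationBox:
    """Illustrative labeling container: a rotatable, stretchable 3D box."""
    label: str
    center: np.ndarray      # (3,) x, y, z
    size: np.ndarray        # (3,) length, width, height
    yaw: float              # rotation about the vertical axis, radians

    def contained_points(self, points):
        """Return the (N, 4) x/y/z/intensity rows that fall inside the box."""
        c, s = np.cos(-self.yaw), np.sin(-self.yaw)
        shifted = points[:, :3] - self.center
        local = shifted.copy()
        # Rotate into the box frame, then compare against half extents.
        local[:, 0] = c * shifted[:, 0] - s * shifted[:, 1]
        local[:, 1] = s * shifted[:, 0] + c * shifted[:, 1]
        inside = np.all(np.abs(local) <= self.size / 2.0, axis=1)
        return points[inside]
```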
S400, an XML labeling file describing the coordinate information and reflection intensity information in the labeling container is generated. Optionally, the XML labeling file is converted into a txt file or another format that is easy to read.
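A sketch of the S400 file generation using Python's standard xml.etree.ElementTree, reusing the AnnotationBox sketch above; the element and attribute names are illustrative, as the patent does not define the XML schema:

```python
import xml.etree.ElementTree as ET

def write_annotation_xml(path, box, points):
    """Describe one labeling container and the coordinates and reflection
    intensities inside it as XML (illustrative schema, not from the patent)."""
    root = ET.Element("annotation")
    container = ET.SubElement(root, "container", label=box.label,
                              yaw=f"{box.yaw:.6f}")
    ET.SubElement(container, "center").text = " ".join(f"{v:.3f}" for v in box.center)
    ET.SubElement(container, "size").text = " ".join(f"{v:.3f}" for v in box.size)

    # Each point inside the container is recorded with its reflection intensity.
    for x, y, z, intensity in box.contained_points(points):
        ET.SubElement(container, "point", x=f"{x:.3f}", y=f"{y:.3f}",
                      z=f"{z:.3f}", intensity=f"{intensity:.1f}")

    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```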
S500, the coordinate information and reflection intensity information are optimized through a combination of elevation cutting, reflection intensity filtering and data thinning, mainly in order to reduce the amount of point cloud data carried in the XML labeling file.
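The three operations can be sketched as simple array filters over (N, 4) x/y/z/intensity points; the thresholds and the thinning stride are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def optimize_points(points, z_min=-2.0, z_max=6.0, min_intensity=1.0, keep_every=2):
    """Apply elevation cutting, reflection-intensity filtering and data thinning."""
    # Elevation cutting: keep only points inside the height band of interest.
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    # Reflection-intensity filtering: drop low-intensity returns.
    mask &= points[:, 3] >= min_intensity
    filtered = points[mask]
    # Data thinning: keep every k-th remaining point.
    return filtered[::keep_every]
```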
S600, the coordinate information and reflection intensity information are extracted from the optimized XML labeling file; binary-format coordinate information and binary-format intensity information are then obtained from them; and the binary-format coordinate information and binary-format intensity information are combined into a binary-format .bin file, that is, the point cloud data set.
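A sketch of the S600 binary packing; the flat float32 x, y, z, intensity layout follows the KITTI/VoxelNet convention and is assumed here, since the patent only states that binary coordinate and intensity information are combined into a .bin file:

```python
import numpy as np

def write_bin(path, points):
    """Pack x, y, z, intensity as consecutive float32 values in a .bin file."""
    np.asarray(points[:, :4], dtype=np.float32).ravel().tofile(path)

# Reading the file back for training uses the same assumed layout:
# cloud = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
```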
During training of the deep learning network model in this embodiment, the point cloud data set and the labels of the targets corresponding to it are used as the sample set and the test set for training.
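The split into a training sample set and a test set can be as simple as the following sketch; the 80/20 ratio, the seed, and the idea that one identifier names both a .bin block and its label file are assumptions:

```python
import random

def split_dataset(sample_ids, train_ratio=0.8, seed=0):
    """Shuffle sample identifiers and split them into training and test lists.

    Each identifier is assumed to name both a .bin point cloud block and its
    corresponding annotation file; ratio and seed are illustrative.
    """
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)
    split = int(len(ids) * train_ratio)
    return ids[:split], ids[split:]

# train_ids, test_ids = split_dataset(["000000", "000001", "000002", "000003"])
```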
When the method disclosed by the embodiment is executed, a point cloud data set fused from single-line and multi-line point clouds can be produced, the data set can be matched with more deep learning network models, engineering application can proceed quickly, no specific point cloud acquisition equipment needs to be designated, and engineering cost can be reduced.
The present embodiment further discloses a point cloud data set manufacturing system based on point cloud deep learning.
Referring to FIG. 2, the system of the present embodiment includes a point cloud acquisition module, a point cloud conversion module, a point cloud cutting module, a point cloud labeling module, a point cloud processing module, and a binary point cloud conversion module. When executed, the point cloud acquisition module acquires point cloud data; the point cloud conversion module converts the longitude and latitude information of the point cloud data into coordinate system information; the point cloud cutting module cuts the converted point cloud data along a track direction into point cloud block data with storage regions of the same size; the point cloud labeling module acquires a target to be identified in the point cloud block data, generates a labeling container that identifies the target, and generates a labeling file describing the coordinate information and reflection intensity information in the labeling container; the point cloud processing module optimizes the coordinate information and the reflection intensity information in the labeling file; and the binary point cloud conversion module generates, from the optimized labeling file, a binary format file for input to the deep learning network model.
The present embodiment is described in detail above. The embodiments in the specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments can be referred to one another. Since the system disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is brief, and the relevant points can be found in the description of the method. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present application.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is further noted that, in the present specification, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. A point cloud data set manufacturing method based on point cloud deep learning is characterized in that,
the method is configured to:
acquiring point cloud data;
converting longitude and latitude information of the point cloud data into coordinate system information;
cutting the converted point cloud data along a track direction to obtain point cloud block data with storage regions of the same size;
acquiring a target to be identified in the point cloud block data, and generating a labeling container for identifying the target;
generating a labeling file for describing coordinate information and reflection intensity information in the labeling container;
optimizing the coordinate information and the reflection intensity information in the labeling file;
and generating, from the optimized labeling file, a binary format file for input to the deep learning network model.
2. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
acquiring point cloud data is configured as:
acquiring the point cloud data obtained after single-line and/or multi-line point cloud fusion.
3. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
cutting the converted point cloud data into point cloud block data with storage regions of the same size is configured as:
the cut and converted point cloud data is point cloud block data no larger than 40 m × 20 m × 15 m.
4. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
generating the labeling container that identifies the target is configured as:
selecting the target to be identified in the point cloud block data;
generating an adjustable labeling box;
and, after adjustment, taking the labeling box that encloses the target as the labeling container.
5. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
generating the labeling file is configured as:
generating, in a markup language, a labeling file that describes the coordinate information and the reflection intensity information in the labeling container.
6. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
the markup language is the Extensible Markup Language (XML).
7. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
optimizing the coordinate information and the reflection intensity information is configured as:
optimizing the coordinate information and the reflection intensity information by a combination of one or more of elevation cutting, reflection intensity filtering, and data thinning.
8. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
extracting the coordinate information and the reflection intensity information from the optimized labeling file;
obtaining binary-format coordinate information from the coordinate information;
obtaining binary-format intensity information from the reflection intensity information;
and combining the binary-format coordinate information and the binary-format intensity information into the binary format file.
9. The method for creating a point cloud data set based on point cloud deep learning according to claim 1,
the deep learning network model is a network model that learns layer by layer based on a three-dimensional space.
10. A point cloud data set manufacturing system based on point cloud deep learning is characterized in that,
the system is configured with:
a point cloud acquisition module configured to acquire point cloud data;
a point cloud conversion module configured to convert latitude and longitude information of the point cloud data into coordinate system information;
a point cloud cutting module configured to cut the converted point cloud data along a track direction into point cloud block data with storage regions of the same size;
a point cloud labeling module configured to acquire a target to be identified in the point cloud block data, generate a labeling container that identifies the target, and generate a labeling file describing the coordinate information and reflection intensity information in the labeling container;
a point cloud processing module configured to optimize the coordinate information and the reflection intensity information in the labeling file;
and a binary point cloud conversion module configured to generate, from the optimized labeling file, a binary format file for input to the deep learning network model.
CN201911401147.0A 2019-12-31 2019-12-31 Point cloud data set manufacturing system and method based on point cloud deep learning Pending CN111223107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401147.0A CN111223107A (en) 2019-12-31 2019-12-31 Point cloud data set manufacturing system and method based on point cloud deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401147.0A CN111223107A (en) 2019-12-31 2019-12-31 Point cloud data set manufacturing system and method based on point cloud deep learning

Publications (1)

Publication Number Publication Date
CN111223107A true CN111223107A (en) 2020-06-02

Family

ID=70808227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401147.0A Pending CN111223107A (en) 2019-12-31 2019-12-31 Point cloud data set manufacturing system and method based on point cloud deep learning

Country Status (1)

Country Link
CN (1) CN111223107A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107533630A (en) * 2015-01-20 2018-01-02 索菲斯研究股份有限公司 For the real time machine vision of remote sense and wagon control and put cloud analysis
CN109214248A (en) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 The method and apparatus of the laser point cloud data of automatic driving vehicle for identification
CN109471128A (en) * 2018-08-30 2019-03-15 福瑞泰克智能系统有限公司 A kind of positive sample production method and device
CN109711336A (en) * 2018-12-26 2019-05-03 深圳高速工程顾问有限公司 Roadmarking determines method, apparatus, storage medium and computer equipment
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110084895A (en) * 2019-04-30 2019-08-02 上海禾赛光电科技有限公司 The method and apparatus that point cloud data is labeled
CN110263652A (en) * 2019-05-23 2019-09-20 杭州飞步科技有限公司 Laser point cloud data recognition methods and device
CN110210398A (en) * 2019-06-03 2019-09-06 宁波智能装备研究院有限公司 A kind of three-dimensional point cloud semantic segmentation mask method
CN110222626A (en) * 2019-06-03 2019-09-10 宁波智能装备研究院有限公司 A kind of unmanned scene point cloud target mask method based on deep learning algorithm
CN110349260A (en) * 2019-07-11 2019-10-18 武汉中海庭数据技术有限公司 A kind of pavement strip extraction method and device
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN110264468A (en) * 2019-08-14 2019-09-20 长沙智能驾驶研究院有限公司 Point cloud data mark, parted pattern determination, object detection method and relevant device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LINESYOO et al.: "KITTI数据集数据初体验" (A first look at the KITTI dataset data), HTTPS://BLOG.CSDN.NET/SYYYAO/ARTICLE/DETAILS/80390284 *
YIN ZHOU et al.: "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417965A (en) * 2020-10-21 2021-02-26 湖北亿咖通科技有限公司 Laser point cloud processing method, electronic device and storage medium
CN112446907A (en) * 2020-11-19 2021-03-05 武汉中海庭数据技术有限公司 Method and device for registering single-line point cloud and multi-line point cloud
CN112785714A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Point cloud instance labeling method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN111223107A (en) Point cloud data set manufacturing system and method based on point cloud deep learning
WO2020052312A1 (en) Positioning method and apparatus, electronic device, and readable storage medium
CN109872392B (en) Man-machine interaction method and device based on high-precision map
CN110705115B (en) Weather forecast method and system based on deep belief network
CN108982522B (en) Method and apparatus for detecting pipe defects
CN109242801B (en) Image processing method and device
CN109086780B (en) Method and device for detecting electrode plate burrs
US10330655B2 (en) Air quality forecasting based on dynamic blending
KR102195999B1 (en) Method, device and system for processing image tagging information
CN110070076B (en) Method and device for selecting training samples
CN111340015B (en) Positioning method and device
Amatya et al. Rainfall‐induced landslide inventories for Lower Mekong based on Planet imagery and a semi‐automatic mapping method
CN108491387B (en) Method and apparatus for outputting information
CN114462469A (en) Training method of target detection model, target detection method and related device
CN111401423A (en) Data processing method and device for automatic driving vehicle
CN107084728B (en) Method and device for detecting digital map
CN114238541A (en) Sensitive target information acquisition method and device and computer equipment
US11449769B2 (en) Cognitive analytics for graphical legacy documents
CN110119721B (en) Method and apparatus for processing information
CN113758492A (en) Map detection method and device
CN111191597A (en) System and method for extracting road structure based on vector line
Doyaoen et al. Real-time building instance detection using tensorflow based on facade images for urban management
CN110136181B (en) Method and apparatus for generating information
CN113808142B (en) Ground identification recognition method and device and electronic equipment
CN110796024B (en) Automatic driving visual perception test method and device for failure sample

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602