CN114943771A - Novel distributed multi-camera fusion positioning and mapping system and method


Info

Publication number: CN114943771A
Application number: CN202210301061.6A
Authority: CN (China)
Prior art keywords: positioning, camera, identification, color block, image
Legal status: Pending (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 吴洋, 陈俊州, 王紫晔, 魏正阳, 阮学彬
Current and Original Assignee: Hohai University (HHU)
Application filed by Hohai University (HHU); priority to CN202210301061.6A; published as CN114943771A

Classifications

    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T7/13 Edge detection
    • G06T2207/20032 Median filtering

Abstract

The invention provides a novel distributed multi-camera fusion positioning and mapping system and method comprising a color block identification module, a digital image processing module, a target identification/positioning and obstacle identification module, and a multi-camera image stitching and mapping module. The method uses multiple distributed cameras working together and, through these four modules and color block identification of markers, achieves rapid target identification.

Description

Novel distributed multi-camera fusion positioning and mapping system and method
Technical Field
The invention belongs to the technical fields of indoor object positioning, image recognition, and image fusion, and in particular relates to a novel distributed multi-camera fusion positioning and mapping system and method.
Background
The identification and positioning of indoor object markers, image recognition, and image fusion are widely applied in many fields, in particular in multi-target systems requiring large-scale visual positioning, such as part identification and grasp-position planning for manufacturing robots, global visual positioning of robotic fish schools, and global visual positioning of warehouse robots. Existing large-scale indoor positioning methods generally suffer from low positioning accuracy, difficulty in acquiring real-time images, low efficiency, and poor universality. A simple, efficient, and accurate way to identify and position objects is therefore urgently needed and is of great significance to manufacturing and related fields.
Disclosure of Invention
To solve the above technical problems and address the shortcomings of typical large-scale indoor target positioning and mapping, the invention provides a novel distributed multi-camera fusion positioning and mapping system and method. The system and method use multiple distributed cameras working together; through a color block identification module, a digital image processing module, a target identification/positioning and obstacle identification module, and a multi-camera image stitching and mapping module, with color block identification used for markers, targets can be identified rapidly.
The aims and effects of the novel distributed multi-camera fusion positioning and mapping system and method are achieved by the following specific technical means:
A novel distributed multi-camera fusion positioning and mapping system comprises a color block identification module, a digital image processing module, a target identification/positioning and obstacle identification module, and a multi-camera image stitching and mapping module;
the color block identification module identifies an object using a sticker bearing a black frame and color blocks;
the digital image processing module extracts each color block from the image through a Canny edge detection algorithm, a contour search algorithm, and a range screening step, and captures and preprocesses images;
the target identification/positioning and obstacle identification module completes identification of the marker blocks and distinction of obstacles by traversing and distance-screening each point within the camera's threshold range;
the multi-camera image stitching and mapping module fuses the feature points in the transmitted multi-camera data images to reconstruct a large number of local coordinate systems, realizing large-scale indoor mapping, and simultaneously constructs new positions of the feature points in a larger world coordinate system to complete large-scale indoor positioning.
Furthermore, the target identification/positioning and obstacle identification module also optimizes the acquired data using median filtering and mean filtering.
A novel distributed multi-camera fusion positioning and mapping method comprises the following steps:
S1, color block identification: performing object identification with black-and-white color block stickers;
S2, digital image processing: acquiring the set of outer contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with a Canny edge detection algorithm, and, exploiting the uniform size of the color blocks, removing color-block noise and outlier points by range exclusion, so that each color block in the image is extracted accurately;
S3, target identification/positioning and obstacle distinction: scanning the image, locating the black border, locating the color blocks inside the border, analyzing the positions and number of the color blocks in the border, identifying each marker frame, and extracting the information of each marker frame to realize positioning;
S4, multi-camera image stitching and mapping: multiple industrial cameras in a distributed architecture serve as data collectors; the cameras cover the target area and work independently; after image data are collected, an attached sub-server performs primary processing of the collected data; the data of all distributed cameras are then gathered and fused in a main server, realizing image stitching and thus large-scale indoor positioning and mapping.
Further, S1 specifically includes the following steps:
S1.1, placing three equally sized black color blocks at the middle of the upper-left corner, the upper-right corner, and the bottom, respectively, as the marker substrate;
S1.2, the novel marker design uses a position-plus-number scheme to identify and count the color blocks at fixed positions within the marker substrate; the number and positions of the color blocks serve as the most intuitive way to distinguish different targets. To meet the requirements of multi-target positioning, a higher-level positioning code is assigned on the binary principle: the position-plus-number scheme supports 2^9 = 512 simultaneous locations, and light-colored numerals can be printed on the markers for clear identification by the naked eye.
Further, S2 specifically includes the following steps:
S2.1, acquiring the set of outer contour points of each connected domain with OpenCV's findContours function: calling the cvtColor and threshold functions to gray and binarize the originally obtained picture, then searching for connected-domain contours with findContours;
S2.2, extracting each color block from the image with a Canny edge detection algorithm.
Further, S2 also includes the following step: S2.3, camera correction and precision improvement: correcting only the feature points, then using the corrected feature points for positioning and mapping.
Further, the obstacle distinction in S3 is specifically: acquiring the set of outer contour points of each connected domain with OpenCV's findContours function, i.e., calling the cvtColor and threshold functions to gray and binarize the originally obtained picture, then searching for connected-domain contours with findContours and computing the perimeters of the different identified objects. The obtained perimeter is compared with a specific value to distinguish targets from obstacles: if the perimeter is larger than the specific value, the object is an obstacle; if smaller, it is a target.
Compared with the prior art, the invention has the following beneficial effects:
1. Using the square-array sticker, and through the color block identification module, the digital image processing module, the target identification/positioning and obstacle identification module, and the multi-camera image stitching and mapping module, color block identification is adopted for markers, so the identification information of a target can be obtained quickly;
2. The object marker identification method provided by the invention only requires pasting a sticker on the target to be identified; stickers are distinguished by black and white blocks added at fixed positions, so the method is simple, convenient, easy to use, and highly portable; it can be applied in many fields of production and daily life and greatly reduces cost;
3. The median filtering and mean filtering adopted by the positioning algorithm of the invention increase the reliability of the data and optimize data acquisition;
4. The data fusion algorithm is very simple and convenient: obstacles are constructed in three dimensions, a large number of local coordinate systems are stitched together by feature-point fusion, and the obstacle map is published through the ROS system as an occupancy grid map;
5. The invention innovates in camera correction and precision. In theory, a camera can be well positioned only after distortion correction; to greatly reduce system complexity, the idea of correcting only key points is adopted. Compared with the previous approach of correcting all 1280 × 1024 pixels, only a few pixel points are corrected in each frame, and the corrected points are then used for positioning and mapping;
6. The invention not only provides positioning of multiple agents in a large-scale environment but also has the capability of building obstacle maps, and can do so at minimal cost.
Drawings
Fig. 1 is a flow chart of the novel positioning and mapping method of the present invention.
FIG. 2 is a schematic view of the color block sticker of the present invention.
Fig. 3 is a schematic diagram of the distributed camera data processing of the present invention.
Fig. 4 is a system layout diagram of the multi-camera image stitching map of the present invention.
Fig. 5 is a gridded obstacle map constructed by the system of the present invention.
FIG. 6 is a diagram illustrating the relationship between coordinate systems according to the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1:
The invention provides a novel distributed multi-camera fusion positioning and mapping system comprising a color block identification module, a digital image processing module, a target identification/positioning and obstacle identification module, and a multi-camera image stitching and mapping module;
the color block identification module identifies an object using a sticker bearing a black frame and color blocks;
the digital image processing module extracts each color block from the image through a Canny edge detection algorithm, a contour search algorithm, and a range screening step, and captures and preprocesses images;
the target identification/positioning and obstacle identification module completes identification of the marker blocks and distinction of obstacles by traversing and distance-screening each point within the camera's threshold range;
the multi-camera image stitching and mapping module fuses the feature points in the transmitted multi-camera data images to reconstruct a large number of local coordinate systems, realizing large-scale indoor mapping, and simultaneously constructs new positions of the feature points in a larger world coordinate system to complete large-scale indoor positioning.
The target identification/positioning and obstacle identification module also optimizes the acquired data using median filtering and mean filtering.
Example 2
The invention provides a novel distributed multi-camera fusion positioning and mapping method which, as shown in Fig. 1, comprises the following steps:
S1, color block identification: performing object identification with the black-and-white color block stickers shown in Fig. 2;
S1.1, placing three equally sized black color blocks at the middle of the upper-left corner, the upper-right corner, and the bottom, respectively, as the marker substrate;
S1.2, the novel marker design uses a position-plus-number scheme to identify and count the color blocks at fixed positions within the marker substrate; the number and positions of the color blocks serve as the most intuitive way to distinguish different targets. To meet the requirements of multi-target positioning, a higher-level positioning code is assigned on the binary principle: the position-plus-number scheme supports 2^9 = 512 simultaneous locations, and light-colored numerals can be printed on the markers for clear identification by the naked eye.
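For illustration, the following is a minimal Python sketch of the position-plus-binary encoding described in S1.2. The 3 × 3 cell layout, cell ordering, and function name are illustrative assumptions, not details taken from the patent figures.

```python
# A minimal sketch of the position-plus-binary identification scheme in
# S1.2; the 3 x 3 layout and row-major ordering are assumptions.

def decode_marker_id(cells):
    """Decode a marker ID from 9 fixed color-block positions.

    `cells` lists 9 booleans (row-major), True where a color block is
    present. Each position contributes one bit, so 9 positions support
    2**9 = 512 distinct simultaneous IDs.
    """
    marker_id = 0
    for bit, present in enumerate(cells):
        if present:
            marker_id |= 1 << bit
    return marker_id

# Example: blocks at positions 0, 3, and 8 encode 1 + 8 + 256 = 265.
print(decode_marker_id([True, False, False, True, False,
                        False, False, False, True]))  # -> 265
```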
S2, digital image processing: acquiring the set of outer contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with a Canny edge detection algorithm, and, exploiting the uniform size of the color blocks, removing color-block noise and outlier points by range exclusion, so that each color block in the image is extracted accurately;
S2.1, acquiring the set of outer contour points of each connected domain with OpenCV's findContours function: calling the cvtColor and threshold functions to gray and binarize the originally obtained picture, then searching for connected-domain contours with findContours;
S2.2, extracting each color block from the image with a Canny edge detection algorithm;
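A minimal sketch of S2.1-S2.2 using the OpenCV functions named above (the cv2 Python bindings, OpenCV 4.x API); the file name, threshold value, and area bounds are illustrative assumptions.

```python
import cv2

# S2.1-S2.2 sketch: assumed input file and thresholds, OpenCV 4.x API.
img = cv2.imread("frame.png")

# S2.1: gray, binarize, then search for connected-domain outer contours.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# S2.2: Canny edge detection to sharpen color-block boundaries.
edges = cv2.Canny(gray, 100, 200)

# Range exclusion: because all color blocks have the same size, contours
# outside the expected area range are noise or outliers and are discarded.
MIN_AREA, MAX_AREA = 50, 5000  # assumed bounds for a single color block
blocks = [c for c in contours
          if MIN_AREA < cv2.contourArea(c) < MAX_AREA]
```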
S2.3, camera correction and precision improvement: in theory, a camera can be well positioned only after distortion correction. To greatly reduce system complexity, the idea of correcting only the feature points is adopted: compared with the previous approach of correcting all 1280 × 1024 pixels, only a few pixel points are corrected in each frame, and the corrected feature points are then used for positioning and mapping. Meanwhile, to improve data accuracy and reduce uncertainty, median filtering and mean filtering are applied to optimize the data.
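A minimal sketch of S2.3: undistorting only the detected feature points rather than the full 1280 × 1024 frame, then smoothing the resulting track. The calibration matrix, distortion coefficients, and window size are placeholder assumptions, not values from the patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients (assumed, not from
# the patent); a real system would load these from camera calibration.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

def undistort_points(pixel_pts):
    """Correct a handful of feature points (N x 2) rather than every pixel."""
    pts = np.asarray(pixel_pts, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.undistortPoints(pts, K, dist, P=K).reshape(-1, 2)

def smooth_position(history, window=5):
    """Median over the last `window` positions rejects outliers; swapping
    np.median for np.mean gives the mean-filtered variant."""
    return np.median(np.asarray(history[-window:]), axis=0)
```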
S3, target identification/positioning and obstacle distinction: the set of outer contour points of each connected domain is acquired with OpenCV's findContours function, each color block is extracted from the image with a Canny edge detection algorithm, and, exploiting the uniform size of the color blocks, color-block noise and outlier points are removed by range exclusion, so that each color block in the image is extracted more accurately. Concretely, the cvtColor and threshold functions gray and binarize the originally obtained picture; findContours then searches for connected-domain contours, and the perimeters of the different identified objects are computed. The obtained perimeter is compared with a specific value to distinguish targets from obstacles: if the perimeter is larger than the specific value, the object is an obstacle; if smaller, it is a target. A target is identified and positioned by first locating the outer frame and then identifying the specific points inside it, i.e., the three substrate points; triangular positioning is realized by computing the distances between the different points, finding the maximum inter-point distance, and performing system processing and judgment. Finally, the marker number is recognized: once each marker frame is identified, the information of each marker frame can be extracted and positioning realized. Objects in the image without an assigned marker, or whose marker cannot be correctly recognized, are judged by the system to be obstacles, and their positions are finally determined. As shown in Fig. 3, the white areas are obstacle regions and the gray blocks are the recognition results of the vehicle markers.
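A minimal sketch of the perimeter test in S3: contours with a perimeter above a threshold are treated as obstacles, the rest as candidate targets. PERIMETER_LIMIT stands in for the "specific value" the text mentions and is an assumed number.

```python
import cv2

PERIMETER_LIMIT = 400.0  # pixels; assumed stand-in for the "specific value"

def classify_contours(contours):
    """Split contours into targets and obstacles by closed-contour perimeter."""
    targets, obstacles = [], []
    for c in contours:
        perimeter = cv2.arcLength(c, True)  # True = treat contour as closed
        (obstacles if perimeter > PERIMETER_LIMIT else targets).append(c)
    return targets, obstacles
```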
S4, multi-camera image stitching and mapping: this functional block adopts a distributed architecture in which each distributed camera performs autonomous identification on a TX2 host, i.e., each camera identifies a specific area. Large-scale indoor positioning and mapping generate a large amount of data; fusing these data is a complex problem that places high demands on the processing host if high timeliness is to be achieved. To solve this, the method identifies image feature points, corrects only a few pixel points such as the positioning feature points, and finally realizes large-scale indoor positioning and mapping by fusing the feature points of different cameras; because different feature points are fused, the image stitching can involve partial overlap between different areas. The method only needs to operate in three dimensions (x, y, z), and the actual fusion of a large number of images is realized through simple translation and rotation: the feature points of the different images are gathered by stitching, and a black-and-white three-dimensional (x, y, z) obstacle map is then output through the ROS system as an occupancy grid map. Through the distributed architecture, real-time multi-camera data are collected, gathered, fused, and stitched, so the system achieves high precision and high timeliness with a simple algorithm and modest performance requirements. Fig. 4 shows the layout of the system, and Fig. 5 shows the occupancy-grid obstacle map constructed by the system.
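A minimal ROS 1 sketch of publishing the fused obstacle map as an occupancy grid, as the text describes. The topic name, frame, resolution, and grid contents are illustrative assumptions.

```python
#!/usr/bin/env python
import numpy as np
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node("obstacle_map_publisher")
# Latched so late subscribers still receive the most recent map.
pub = rospy.Publisher("/map", OccupancyGrid, queue_size=1, latch=True)

grid = OccupancyGrid()
grid.header.stamp = rospy.Time.now()
grid.header.frame_id = "base_link"           # world frame from the text
grid.info.resolution = 0.05                  # metres per cell (assumed)
grid.info.width, grid.info.height = 200, 200

cells = np.zeros((200, 200), dtype=np.int8)  # 0 = free space
cells[80:120, 80:120] = 100                  # 100 = occupied (example obstacle)
grid.data = cells.flatten().tolist()

pub.publish(grid)
rospy.spin()
```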
For target positioning, the coordinates of the target in the world coordinate system are obtained. Since the local coordinate systems of the different distributed cameras differ, coordinate conversion is completed with the TF (transform) facility of ROS. An origin reference point is selected in the world map and a relationship is established with each local coordinate system; the well-defined relationships between the local coordinate systems ("base_camera_0", "base_camera_1", "base_camera_2", etc.) and the world coordinate system ("base_link") are stored in the tf transform tree, which defines and manages the relationship between each "base_camera_N" frame and "base_link". Conceptually, each node in the transform tree corresponds to a coordinate system, and each edge corresponds to the transform that must be applied to move from the current node to its child. tf uses a tree structure to ensure that any two coordinate systems are connected by exactly one traversal; assuming all edges in the tree point from parent to child, the conversion of coordinates from the multiple local coordinate systems into world coordinates can be completed. The relationship between the coordinate systems is shown in Fig. 6.
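A minimal ROS 1 sketch of registering the tf tree described above: each camera-local frame "base_camera_N" is attached as a child of the world frame "base_link". The translations and mounting height are placeholder extrinsics, not values from the patent.

```python
#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("camera_tf_broadcaster")
broadcaster = tf2_ros.StaticTransformBroadcaster()

transforms = []
for i, (x, y) in enumerate([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]):
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"          # parent: world frame
    t.child_frame_id = "base_camera_%d" % i  # child: camera-local frame
    t.transform.translation.x = x            # placeholder extrinsics
    t.transform.translation.y = y
    t.transform.translation.z = 3.0          # assumed ceiling-mount height
    t.transform.rotation.w = 1.0             # identity rotation for brevity
    transforms.append(t)

broadcaster.sendTransform(transforms)        # accepts a transform or a list
rospy.spin()
```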
The embodiments of the present invention have been presented for purposes of illustration and description and are not intended to be exhaustive or to limit the invention to the forms disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention in its various embodiments, with such modifications as are suited to the particular use contemplated.

Claims (7)

1. A novel distributed multi-camera fusion positioning and mapping system, characterized in that: it comprises a color block identification module, a digital image processing module, a target identification/positioning and obstacle identification module, and a multi-camera image stitching and mapping module;
the color block identification module identifies an object using a sticker bearing a black frame and color blocks;
the digital image processing module extracts each color block from the image through a Canny edge detection algorithm, a contour search algorithm, and a range screening step, and captures and preprocesses images;
the target identification/positioning and obstacle identification module completes identification of the marker blocks and distinction of obstacles by traversing and distance-screening each point within the camera's threshold range;
the multi-camera image stitching and mapping module fuses the feature points in the transmitted multi-camera data images to reconstruct a large number of local coordinate systems and realize large-scale indoor mapping, and simultaneously constructs new positions of the feature points in a larger world coordinate system to complete large-scale indoor positioning.
2. The novel distributed multi-camera fusion positioning and mapping system of claim 1, characterized in that: the target identification/positioning and obstacle identification module also optimizes the acquired data using median filtering and mean filtering.
3. A novel distributed multi-camera fusion positioning and mapping method, based on the novel distributed multi-camera fusion positioning and mapping system of any one of claims 1-2, characterized in that: the method comprises the following steps:
S1, color block identification: performing object identification with the black-and-white color block stickers;
S2, digital image processing: acquiring the set of outer contour points of each connected domain with OpenCV's findContours function, extracting each color block from the image with a Canny edge detection algorithm, and, exploiting the uniform size of the color blocks, removing color-block noise and outlier points by range exclusion, so that each color block in the image is extracted accurately;
S3, target identification/positioning and obstacle distinction: scanning the image, locating the black border, locating the color blocks inside the border, analyzing the positions and number of the color blocks in the border, identifying each marker frame, and extracting the information of each marker frame to realize positioning;
S4, multi-camera image stitching and mapping: multiple industrial cameras in a distributed architecture serve as data collectors; the cameras cover the target area and work independently; after image data are collected, an attached sub-server performs primary processing of the collected data; the data of all distributed cameras are then gathered and fused in a main server, realizing image stitching and thus large-scale indoor positioning and mapping.
4. The novel distributed multi-camera fusion positioning and mapping method of claim 3, characterized in that: S1 specifically comprises the following steps:
S1.1, placing three equally sized black color blocks at the middle of the upper-left corner, the upper-right corner, and the bottom, respectively, as the marker substrate;
S1.2, the novel marker design uses a position-plus-number scheme to identify and count the color blocks at fixed positions within the marker substrate; the number and positions of the color blocks serve as the most intuitive way to distinguish different targets. To meet the requirements of multi-target positioning, a higher-level positioning code is assigned on the binary principle: the position-plus-number scheme supports 2^9 = 512 simultaneous locations, and light-colored numerals can be printed on the markers for clear identification by the naked eye.
5. The novel distributed multi-camera fusion positioning and mapping method of claim 3, characterized in that: S2 specifically comprises the following steps:
S2.1, acquiring the set of outer contour points of each connected domain with OpenCV's findContours function: calling the cvtColor and threshold functions to gray and binarize the originally obtained picture, then searching for connected-domain contours with findContours;
S2.2, extracting each color block from the image with a Canny edge detection algorithm.
6. The novel distributed multi-camera fusion positioning and mapping method of claim 5, characterized in that: S2 further comprises the following step:
S2.3, camera correction and precision improvement: correcting only the feature points, then using the corrected feature points for positioning and mapping.
7. The novel distributed multi-camera fusion positioning and mapping method of claim 3, characterized in that: the obstacle distinction in S3 is specifically: acquiring the set of outer contour points of each connected domain with OpenCV's findContours function, i.e., calling the cvtColor and threshold functions to gray and binarize the originally obtained picture, then searching for connected-domain contours with findContours and computing the perimeters of the different identified objects; the obtained perimeter is compared with a specific value to distinguish targets from obstacles: if the perimeter is larger than the specific value, the object is an obstacle; if smaller, it is a target.
CN202210301061.6A 2022-03-24 2022-03-24 Novel distributed multi-camera fusion positioning and mapping system and method Pending CN114943771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210301061.6A CN114943771A (en) 2022-03-24 2022-03-24 Novel distributed multi-camera fusion positioning and mapping system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210301061.6A CN114943771A (en) 2022-03-24 2022-03-24 Novel distributed multi-camera fusion positioning and mapping system and method

Publications (1)

Publication Number Publication Date
CN114943771A 2022-08-26

Family

ID=82905801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210301061.6A Pending CN114943771A (en) 2022-03-24 2022-03-24 Novel distributed multi-camera fusion positioning and mapping system and method

Country Status (1)

Country Link
CN (1) CN114943771A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117549938A (en) * 2023-12-13 2024-02-13 中国铁道科学研究院集团有限公司 Intelligent diagnosis analysis system for recording lamp position state information of train control vehicle-mounted equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination