CN110263635B - Marker detection and identification method based on structural forest and PCANet - Google Patents


Info

Publication number
CN110263635B
Authority
CN
China
Prior art keywords
image
marker
forest
pcanet
edge
Prior art date
Legal status
Active
Application number
CN201910396062.1A
Other languages
Chinese (zh)
Other versions
CN110263635A (en)
Inventor
杨小冈
马玛双
卢瑞涛
李传祥
齐乃新
李维鹏
Current Assignee
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA filed Critical Rocket Force University of Engineering of PLA
Priority to CN201910396062.1A priority Critical patent/CN110263635B/en
Publication of CN110263635A publication Critical patent/CN110263635A/en
Application granted granted Critical
Publication of CN110263635B publication Critical patent/CN110263635B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F18/2135 Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F18/23213 Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24323 Tree-organised classifiers
    • G06T7/13 Edge detection
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06V20/582 Recognition of traffic signs
    • G06T2207/10016 Video; image sequence
    • G06T2207/20192 Edge enhancement; edge preservation
    • G06T2207/30256 Lane; road marking
    • G06V2201/09 Recognition of logos
    • Y02T10/40 Engine management systems (internal combustion engine based vehicles)


Abstract

The invention belongs to the technical field of automatic target identification and discloses a marker detection and identification method based on a structural forest and PCANet. The method first detects the edge structure of pavement markers using a structural forest; then, for auxiliary lines and typical markers in the scene, it extracts auxiliary-line and corner feature regions with a dynamic clustering algorithm based on skeleton extraction, and determines candidate regions of typical markers with a maximally stable extremal region (MSER) feature detection algorithm based on image enhancement; finally, it identifies the markers in the candidate regions with a PCANet structure. The invention performs structured mapping on markers whose edge structure features are not obvious, and extracts dynamic-clustering and enhanced MSER features for PCANet identification. The method overcomes the problems of low marker contrast and small training data sets, and is of practical significance for providing auxiliary information to drivers in real time.

Description

Marker detection and identification method based on structural forest and PCANet
Technical Field
The invention belongs to the technical field of automatic target identification, and particularly relates to a marker detection and identification method based on a structural forest and PCANet.
Background
Detection and identification of road markers is an important application of machine vision: it provides reliable target position information, enables vehicle localization in specific scenes, and is widely used in autonomous driving, driver assistance systems, and visual navigation. In general, marker detection and identification can be described as follows: in a specific scene, a specific image preprocessing method is applied to the acquired scene marker information to extract the marker feature structure from the image; the extracted features are fed into a classifier, which finally produces the identification result. The working process is as follows:
(1) First, acquire image information in a known scene and design image processing methods suited to the structural characteristics of the markers in the image, reducing interference from irrelevant regions while enhancing the feature structure of the markers, which facilitates the computation of subsequent feature descriptors;
(2) then, during feature extraction, design a rule-based method for extracting marker candidate regions from the preprocessed image sequence;
(3) finally, identify the marker by template matching on the candidate regions to be identified, or by feeding them into a classifier.
At present, marker detection and identification methods fall into two categories: methods based on traditional feature extraction and methods based on machine learning. Traditional feature extraction methods apply a series of complex processing steps to obtain image gradient features matched to the marker's structure, which are then used to build marker templates; this typically entails laborious preparation, complex computation, and low recognition efficiency. Such methods mainly rely on feature points and feature descriptors, so algorithm performance is limited by threshold selection under different illumination conditions; generality is poor, thresholds often have to be retuned for the environment during actual operation, and feature extraction is cumbersome. Meanwhile, recognition accuracy is tied to the number of template matches: improving feature extraction accuracy usually comes at the cost of higher feature dimensionality or multi-scale templates, yielding high computational complexity and low recognition accuracy. Machine learning methods detect and identify markers by constructing large data sets and optimizing training strategies; this greatly improves recognition accuracy but requires large training sets and delivers less-than-ideal real-time performance. To achieve good recognition and classification, a large number of marker images must be collected to build training and test sets, so the training data set is large, and different training strategies must be formulated to optimize network parameters.
Moreover, because feature extraction uses a multi-layer convolutional architecture, it has high dimensionality and high computational complexity, and real-time performance is unsatisfactory. A lightweight network architecture for marker identification is therefore needed, one that reduces the training data set and improves real-time performance, so that marker detection and identification can be realized simply and efficiently and corresponding auxiliary information provided to vehicles.
In summary, the problems of the prior art are as follows: existing feature extraction and identification methods depend excessively on the selection of feature descriptor parameters and on data set preparation, involve complex computation, and have low recognition efficiency.
The difficulty of these technical problems is as follows: when a vehicle arrives at a preset location, pavement markers need to be identified to assist a person in making decisions. Pavement marker detection and identification mainly targets markers on the road surface, including lane lines, arrows, line markers, area markers, and optical characters. In complex, specialized application scenes, given external factors such as illumination change, shadow, marker occlusion, and image distortion, accurately detecting and identifying auxiliary lines and area markers in real time remains difficult.
The significance of solving these technical problems is as follows: the aim is to address the drawbacks of conventional methods, which depend heavily on the selection of feature descriptor parameters, require large training data sets, involve complex computation, and have poor real-time performance. For feature extraction, a processing method that enhances the contrast of marker feature structures is needed to improve the stability of feature points and feature descriptors. For marker identification, a lightweight network architecture is adopted to improve real-time performance while maintaining recognition accuracy.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a marker detection and identification method based on structural forests and PCANet.
The invention is realized as follows. The method for detecting and identifying markers based on a structural forest and PCANet comprises the following steps:
firstly, preprocessing an image; acquiring a video image sequence, and carrying out distortion correction on the image by combining internal parameters of a camera; obtaining a mapping image of an image edge structure by adopting an edge detection algorithm based on a structural forest;
secondly, extracting candidate regions. For auxiliary-line and corner region extraction, on the basis of the obtained edge structure map, a dynamic clustering algorithm based on skeleton extraction is used: the skeleton of the auxiliary lines is extracted by the K3M sequential iteration method, and the lines are cluster-analyzed in Hough space. If a line is judged an inlier of an existing line cluster, that cluster is updated; if it is judged an outlier of all line clusters, the cluster categories and their count are updated. The auxiliary line is then fitted by the least squares algorithm, and the intersections between lines are solved as corner regions. For typical marker region extraction, to enhance the edge-structure difference between marker regions and the background, the edge structures of background and marker in the edge structure map are enhanced according to the formulas below; an MSER feature detector then extracts the maximally stable extremal regions in the image, and a region satisfying the set conditions is taken as a marker candidate region; otherwise it is deleted as an interference region. The formulas for enhancing the edge structures of background and marker in the edge structure map are:
I_bd = I_gray - I_edge
I_db = (1 - I_edge) - I_gray
where I_gray is the grayscale map of the input image, I_bd is the result of enhancing edges brighter than the surrounding image, and I_db is the result of enhancing edges darker than the surrounding image;
thirdly, identifying the markers. For the generated corner and marker candidate regions, compute the binary hash codes of each candidate region to obtain extended histogram features, then classify and identify them with a classifier of a pre-trained PCANet structure.
Further, the first step specifically includes:
the random forest is composed of N independent decision trees T i (x) Each decision tree is composed of hierarchical nodes. For each decision tree T i (x) Given its corresponding training set
Figure GDA0002159503020000041
Gain according to node separation information I j The maximum principle, the separation function can be determined as:
Figure GDA0002159503020000042
Figure GDA0002159503020000043
in the formula (I), the compound is shown in the specification,
Figure GDA0002159503020000044
wherein theta is j Parameters to maximize information gain; theta.theta. j K is a certain quantized feature of x, and γ is a threshold value corresponding to the feature; separating functions by recursively training nodes
Figure GDA0002159503020000045
And
Figure GDA0002159503020000046
until a predetermined decision tree depth or information gain threshold is reached; the information gain is defined as:
Figure GDA0002159503020000047
Figure GDA0002159503020000048
in the formula, H (S) j ) Is Shannon entropy, p y For training data set S j Probability of corresponding output label y;
calculation of I j For each node j, mapping all labels y in the node to discretization labels c, and adopting c to replace y to calculate I j
Figure GDA0002159503020000049
And the output of the structured random forest maps the high-dimensional output label y into a binary vector. In order to reduce the calculation amount, judging whether the nodes with similar output labels y belong to the same marker by adopting a K-means clustering or a dimensionality reduction quantification method of principal component analysis, and giving specific label numbers C (1,2, …, K) of the nodes; in the training process, the BSDS500 is used as a training set of the structured random forest to obtain an edge detection model based on the structured forest for edge detection.
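The node-splitting rule above can be sketched in plain Python. This is an illustrative sketch, not the patent's implementation: `best_split` scans a supplied list of candidate (k, γ) pairs and keeps the one maximizing the information gain I_j, whereas a real structured forest samples candidates and works on structured (discretized) labels.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) = -sum_y p_y * log2(p_y)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """I_j = H(S_j) - sum over k in {L, R} of |S_j^k|/|S_j| * H(S_j^k)."""
    n = len(parent)
    return (entropy(parent)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def best_split(samples, labels, candidates):
    """Pick theta_j = (k, gamma) maximizing the gain, with the
    separation function h(x, theta_j) = [x[k] < gamma]."""
    best_theta, best_gain = None, -1.0
    for k, gamma in candidates:
        left = [y for x, y in zip(samples, labels) if x[k] < gamma]
        right = [y for x, y in zip(samples, labels) if x[k] >= gamma]
        if not left or not right:   # degenerate split, skip
            continue
        gain = information_gain(labels, left, right)
        if gain > best_gain:
            best_theta, best_gain = (k, gamma), gain
    return best_theta, best_gain
```

Recursing on the left/right subsets until the depth or gain threshold is reached gives one decision tree; N such trees form the forest.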
Further, the second step specifically includes: the K3M sequential iteration algorithm is adopted, which comprises two stages. First, a pseudo skeleton is extracted: connectivity analysis is performed on the neighborhood of each pixel in a fixed rotation direction, and the outer contour of the marker is removed by continuous iterative erosion, extracting a pseudo skeleton two pixels wide. Then the 8-neighborhoods of all pixels on the pseudo skeleton are weight-coded, and the real skeleton of the image is extracted.
Skeleton extraction yields line clusters l_j, whose coordinates transformed into Hough space are (ρ_j, θ_j) with θ ∈ [0, 180°), on which line clustering is performed. θ is divided into 180 equal intervals, all angles are voted, and the sum of the line lengths of each class is computed:
Length = Σ_{j=1}^{m} length_j
where n is the number of line clusters, m is the number of lines in each class, and length_j is the length of a detected line. When Length is greater than λ, all line clusters in that interval are clustered as a new line, λ being the minimum threshold on auxiliary-line length;
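The θ-interval voting and length-threshold test can be sketched as follows. The 180 equal intervals match the text; the value of λ and the input format are illustrative assumptions.

```python
def cluster_lines(lines, num_bins=180, lambda_min=40.0):
    """Dynamic-clustering sketch: vote each line's theta into one of
    num_bins equal intervals over [0, 180) degrees, then keep only the
    clusters whose total length, Length = sum_j length_j, exceeds the
    minimum auxiliary-line length lambda_min (an assumed value).

    lines: iterable of (rho, theta_deg, length) tuples."""
    width = 180.0 / num_bins
    clusters = {}
    for rho, theta, length in lines:
        b = int((theta % 180.0) / width)          # theta interval index
        clusters.setdefault(b, []).append((rho, theta, length))
    # retain clusters with summed length above the threshold
    return {b: members for b, members in clusters.items()
            if sum(m[2] for m in members) > lambda_min}
```

Each surviving cluster would then be fitted by least squares, and the pairwise intersections of the fitted lines taken as corner regions.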
after the image is subjected to Gaussian filtering, a structural forest training model is adopted, and a mapping chart of the structural edge can be obtained; MSER characteristics have a good extraction effect on areas with brightness higher than ambient brightness or lower than ambient brightness; after the edge structure in the image is enhanced, the MSER characteristics are adopted to extract the maximum stable extremum region, and the image with the enhanced edge is represented as follows:
I bd =I gray -I edge
I db =(1-I edge )-I gray
in the formula I gray As a grey-scale map of the input image, I bd As a result of enhancing edges in the image that are brighter than other image areas, I db As a result of enhancing the edges of the image that are less bright than the other image areas.
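A minimal sketch of the two enhancement formulas, assuming both the grayscale image and the edge map are float arrays scaled to [0, 1] (the scaling convention is an assumption, not stated in the text):

```python
import numpy as np

def enhance_edges(i_gray, i_edge):
    """Edge enhancement per the two formulas above.

    i_gray: grayscale image, i_edge: structured-forest edge map,
    both float arrays in [0, 1].
      I_bd = I_gray - I_edge        -> emphasizes edges brighter than
                                       the surrounding image
      I_db = (1 - I_edge) - I_gray  -> emphasizes edges darker than
                                       the surrounding image
    """
    i_bd = i_gray - i_edge
    i_db = (1.0 - i_edge) - i_gray
    return i_bd, i_db
```

The two enhanced images would each be fed to the MSER detector, covering markers lighter and darker than the road surface respectively.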
Further, after the image enhancement processing, the region edges of the markers differ markedly from the image background. The MSER feature detector extracts pixels with similar color information within a region, yielding the extremal stable regions in the image; meanwhile, the number of candidate regions can be reduced by setting limiting conditions relevant to the markers. The constraints are:
(a) area-ratio constraint: in the MSER region, the ratio of the filled pixel area to the minimum bounding rectangle of the candidate region, used to remove curved lane markers from the image;
(b) aspect-ratio constraint: the width-to-height ratio of the candidate region's minimum bounding rectangle, used to remove fine crack regions from the image;
(c) width constraint: the width of a marker candidate region is limited to a percentage range of the image width;
(d) height constraint: the height of a marker candidate region is limited to a percentage range of the image height.
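The four constraints can be sketched as a single predicate. All numeric thresholds below are illustrative assumptions; the patent specifies only the form of each constraint, not its values.

```python
def is_marker_candidate(region_area, box_w, box_h, img_w, img_h,
                        min_fill=0.3, aspect_range=(0.3, 4.0),
                        width_pct=(0.02, 0.60), height_pct=(0.02, 0.60)):
    """Apply constraints (a)-(d) to one MSER region.

    region_area: filled pixel count of the region; box_w, box_h:
    its minimum bounding rectangle; img_w, img_h: image size.
    Threshold values are assumptions chosen for illustration."""
    fill = region_area / float(box_w * box_h)        # (a) area ratio
    aspect = box_w / float(box_h)                    # (b) aspect ratio
    width_ok = width_pct[0] * img_w <= box_w <= width_pct[1] * img_w    # (c)
    height_ok = height_pct[0] * img_h <= box_h <= height_pct[1] * img_h  # (d)
    return (fill >= min_fill
            and aspect_range[0] <= aspect <= aspect_range[1]
            and width_ok and height_ok)
```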
Further, the third step is realized as follows: for an input image, the PCANet structure performs zero-mean normalization and PCA filtering. A mapping matrix of image features is formed by solving for the first k eigenvectors, and the first-layer PCA filters are expressed as:
W_l^1 = mat_{k1,k2}(q_l(X X^T)), l = 1, 2, …, L1
where k1 × k2 is the size of the sliding window, q_l(X X^T) denotes the l-th principal eigenvector of the image features, and W_l^1 is the feature mapping matrix corresponding to the L1 largest eigenvalues of the covariance matrix. After two layers of PCA filters, the output values are binary hash coded; the number of coded bits equals the number of second-layer filters L2, expressed as:
T_i = Σ_{l=1}^{L2} 2^{l-1} · H(O_i^l)
where H(·) is the Heaviside step function and O_i^l is the l-th second-layer filter output. The matrix output by the first layer is divided into B blocks, the hash code values of the B blocks are counted, and the histogram features of the B blocks are concatenated to form the extended histogram feature representing the features extracted from the image:
f_i = [Bhist(T_i^1), …, Bhist(T_i^{L1})]^T
where Bhist(·) denotes the sub-histogram feature; the output is a vector of dimension (2^{L2}) · L1 · B.
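A minimal NumPy sketch of the three PCANet ingredients described above: first-stage PCA filter learning, binary hash coding, and block-histogram pooling. Patch handling and the block split are simplified assumptions, not the patent's exact implementation.

```python
import numpy as np

def pca_filters(patches, L1):
    """First-stage PCANet filters: zero-mean each k1*k2 patch, then take
    the eigenvectors of X X^T for the L1 largest eigenvalues.
    patches: (n, k1*k2) matrix. Returns an (L1, k1*k2) filter bank."""
    X = patches - patches.mean(axis=1, keepdims=True)  # zero-mean per patch
    cov = X.T @ X                        # (k1*k2, k1*k2) scatter matrix
    vals, vecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(vals)[::-1][:L1]  # indices of the L1 largest
    return vecs[:, order].T

def binary_hash(outputs):
    """T = sum over l of 2^(l-1) * H(O_l), H the Heaviside step.
    outputs: (L2, H, W) second-stage responses; returns an integer
    map with values in [0, 2^L2)."""
    bits = (outputs > 0).astype(np.int64)
    weights = (2 ** np.arange(bits.shape[0])).astype(np.int64)
    return np.tensordot(weights, bits, axes=1)

def block_histogram(hash_map, L2, num_blocks):
    """Split the hash map into num_blocks column blocks, histogram the
    2^L2 code values in each, and concatenate into the extended
    histogram feature."""
    blocks = np.array_split(hash_map, num_blocks, axis=1)
    return np.concatenate([np.bincount(b.ravel(), minlength=2 ** L2)
                           for b in blocks])
```

Concatenating the block histograms over all L1 first-stage channels gives the (2^L2) · L1 · B dimensional feature vector mentioned above.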
Another object of the present invention is to provide an automatic driving control system applying the structural forest and PCANet based marker detection and identification method.
Another object of the present invention is to provide a driver assistance system applying the structural forest and PCANet based marker detection and recognition method.
The invention further aims to provide a visual navigation control system applying the structural forest and PCANet-based marker detection and identification method.
In summary, the advantages and positive effects of the invention are: the invention designs a marker detection and identification method based on a structural forest and PCANet, addressing the problems of low feature-structure contrast, small training data sets, and poor real-time performance in pavement marker detection and identification in special scenes. The method obtains edge structure features with a structured-random-forest edge detection algorithm, obtains auxiliary-line and corner candidate regions with a dynamic clustering algorithm, obtains maximally stable extremal regions with an image enhancement algorithm, and finally identifies the candidate regions with the lightweight network structure PCANet. The method overcomes the problems of low marker contrast and small training data sets, and is of practical significance for providing auxiliary information to drivers in real time.
Drawings
Fig. 1 is a flowchart of a marker detection and identification method based on structural forests and pcanets according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of detection and identification of markers based on structural forests and pcanets according to an embodiment of the present invention.
Fig. 3 is a flow chart of an implementation of the method for detecting and identifying markers based on structural forests and pcanets, provided by the embodiment of the invention.
FIG. 4 shows the processing results at each stage according to an embodiment of the present invention:
(a) original image; (b) edge detection based on the structural forest; (c) corner points and candidate regions; (d) marker identification result.
FIG. 5 is a graph of comparative results provided by examples of the present invention;
in the figure: (a) comparing the experimental results of algorithm 1; (b) comparing the experimental results of algorithm 2; (c) comparing the experimental results of algorithm 3; (d) experimental results of the algorithm of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The invention aims to overcome the defects of feature extraction and identification methods that depend excessively on data set preparation, involve complex computation, and have low recognition efficiency. For the problem of detecting and identifying pavement markers in special scenes, a method based on a structural forest and a PCANet structure is provided. First, the edge structure of the pavement markers is detected based on a structural forest; then, for auxiliary lines and typical markers in the scene, auxiliary-line and corner feature regions are extracted with a dynamic clustering algorithm based on skeleton extraction, and candidate regions of typical markers are determined with a maximally stable extremal region feature detection algorithm based on image enhancement; finally, the PCANet structure identifies the markers in the candidate regions.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the method for detecting and identifying markers based on structural forests and pcanets provided by the embodiment of the present invention includes the following steps:
s101: by an image preprocessing stage, aiming at the problem that the edge structure characteristics of the marker are not obvious in a complex scene, detecting the edge characteristics of a target by adopting a structured random forest;
s102: in the generation stage of the candidate region, extracting the candidate region of the marker in the image by respectively adopting a dynamic clustering algorithm and an improved maximum stable extremum region characteristic extraction algorithm according to the obtained edge structure mapping map;
s103: and according to the actual scene target, a target training set is manufactured to train the PCANet, and the candidate marker area is identified.
The marker detection and identification method based on the structural forest and PCANet specifically includes the following steps:
(1) Image preprocessing: first acquire a video image sequence and correct image distortion using the camera's intrinsic parameters, then obtain a map of the image edge structure with the structural-forest edge detection algorithm;
(2) Candidate region extraction. For auxiliary-line and corner region extraction, on the basis of the obtained edge structure map, a dynamic clustering algorithm based on skeleton extraction is used: the skeleton of the auxiliary lines is extracted by the K3M sequential iteration method, and the lines are cluster-analyzed in Hough space. If a line is judged an inlier of an existing line cluster, that cluster is updated; if it is judged an outlier, the cluster categories and their count are updated. The auxiliary line is then fitted by least squares, and the intersections between lines are solved as corner regions. For typical marker region extraction, to enhance the edge-structure difference between marker regions and the background, the edge structures of background and marker in the edge structure map are enhanced according to formulas (7) and (8); an MSER feature detector then extracts the maximally stable extremal regions in the image, and a region satisfying the set conditions is taken as a marker candidate region, otherwise it is deleted as an interference region;
(3) Marker identification: for the generated corner and marker candidate regions, compute the binary hash codes of each candidate region to obtain extended histogram features, then classify and identify them with a classifier of a pre-trained PCANet structure.
The application of the principles of the present invention will now be described in further detail with reference to the accompanying drawings.
According to the marker detection and identification method based on the structural forest and the PCANet, provided by the embodiment of the invention, the implementation flow of an algorithm is shown in FIG. 3, and the method comprises the following steps:
the method comprises the following steps: acquiring a video image sequence, carrying out distortion correction on the image by combining internal parameters of a camera, and then obtaining a mapping image of an image edge structure by adopting an edge detection algorithm based on a structural forest;
step two: aiming at auxiliary lines and typical markers in a scene, extracting auxiliary line and corner feature candidate regions by adopting a dynamic clustering algorithm based on skeleton extraction on the basis of an image edge structure mapping map, and determining the candidate regions of the typical markers by adopting a maximum stable extremum region feature detection algorithm based on image enhancement processing;
step three: respectively calculating the binarized hash code of each candidate region according to the generated marker candidate regions to obtain the extended histogram feature, and then carrying out classification and identification by adopting a classifier of a pre-trained PCANet structure.
In a further improvement of the invention, step one is implemented as follows:
The random forest is composed of N independent decision trees T_i(x), each consisting of hierarchically arranged nodes. Each decision tree T_i(x) is given its corresponding training set

S_i ⊂ X × Y
According to the principle of maximizing the node split information gain I_j, the separation function is determined as:

h(x, θ_j) ∈ {0, 1}

h(x, θ_j) = [x(k) < γ]
where

θ_j = (k, γ)

with θ_j the parameter that maximizes the information gain, k a quantized feature of x, and γ the threshold corresponding to that feature. The node separation functions are trained recursively on the two subsets

S_j^L = {(x, y) ∈ S_j | h(x, θ_j) = 0}

and

S_j^R = S_j \ S_j^L
until a predetermined decision tree depth or information gain threshold is reached. The information gain is defined as:
I_j = H(S_j) − Σ_{k∈{L,R}} (|S_j^k| / |S_j|) H(S_j^k)

H(S_j) = −Σ_y p_y log p_y
where H(S_j) is the Shannon entropy and p_y is the probability that training data set S_j outputs label y.
To facilitate the calculation of I_j, for each node j all labels y in the node are mapped to discretized labels c, and c is used in place of y to calculate I_j:

π : y ∈ Y → c ∈ C = {1, 2, …, k}
The output layer of the structured random forest maps the high-dimensional output label y into a binary vector. To reduce the amount of computation, a dimension-reducing quantization method (K-means clustering or principal component analysis) is adopted to judge whether nodes with similar output labels y belong to the same marker, and each node is given a specific label number c ∈ C = {1, 2, …, k}. During training, BSDS500 is used as the training set of the structured random forest to obtain a structured-forest edge detection model for edge detection.
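The entropy-based node splitting described above can be sketched as follows; this is a minimal NumPy illustration of choosing θ_j = (k, γ) by maximum information gain over discretized labels c (the exhaustive threshold search and the function names are choices of this sketch, not the training procedure of the patent):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy H(S) = -sum_y p_y * log2(p_y) over discrete labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(labels, left_mask):
    # I_j = H(S_j) - sum_k |S_j^k|/|S_j| * H(S_j^k) for a binary split.
    n = len(labels)
    gain = entropy(labels)
    for part in (labels[left_mask], labels[~left_mask]):
        if len(part):
            gain -= len(part) / n * entropy(part)
    return gain

def best_split(X, labels):
    # Search feature index k and threshold gamma for the split
    # h(x, theta) = [x(k) < gamma] that maximizes information gain.
    best = (None, None, -np.inf)
    for k in range(X.shape[1]):
        for gamma in np.unique(X[:, k]):
            mask = X[:, k] < gamma
            if mask.all() or not mask.any():
                continue  # degenerate split: one side empty
            g = info_gain(labels, mask)
            if g > best[2]:
                best = (k, gamma, g)
    return best
```

In the actual structured forest, the candidate features are the quantized channel features of image patches rather than raw columns as here.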
In a further improvement of the invention, step two is implemented as follows:
Binarizing the result of image edge detection brings problems such as noise and discontinuity of the marker contour edges, so morphological filtering (dilation, denoising, smoothing, and similar operations) is applied to the image to obtain a relatively complete marker contour region. To conveniently express the position and direction of the auxiliary lines, a thinning operation is used to extract a skeleton from the morphologically filtered image. The K3M sequential iteration algorithm is adopted, which is divided into two steps: first, a pseudo-skeleton is extracted by performing connectivity analysis on the neighborhood of each pixel in a fixed rotation direction and removing the outer contour of the marker through continuous iterative erosion, yielding a pseudo-skeleton two pixels wide; then the 8-neighborhoods of all pixels on the pseudo-skeleton are weight-coded and the true skeleton of the image is extracted.
On the basis of skeleton extraction, because the number of points in the image is small, the lines on which the skeleton lies can be detected efficiently by the probabilistic Hough transform (HoughP). However, the skeleton points extracted from the same auxiliary line do not lie strictly on one straight line, so the line clusters corresponding to the same auxiliary line must be grouped; and because the number and directions of the lines in the scene are uncertain, a dynamic clustering algorithm is required. The invention converts each line l_j obtained from the extracted skeleton into the Hough-space coordinates (ρ_j, θ_j) and performs line clustering on them, where θ ∈ [0, 180°). Here θ is divided equally into 180 small intervals, votes are cast for all angles, and the sum of the line lengths of each class is then calculated:
Length_i = Σ_{j=1}^{m} length_j,  i = 1, 2, …, n
where n is the number of line clusters, m is the number of lines in each cluster, and length_j is the length of a detected line. When the summed length exceeds λ, all line clusters in that interval are clustered as a new line, where λ is the minimum threshold on auxiliary-line length.
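The θ-interval voting for dynamic line clustering can be sketched as below; the tuple format for the HoughP output and the per-degree binning are assumptions of this sketch:

```python
import numpy as np

def cluster_lines(lines, lam):
    # Dynamic clustering of HoughP line segments by angle.
    # lines: iterable of (rho, theta_deg, length) with theta in [0, 180)
    # lam:   minimum total length for a cluster to count as an auxiliary line
    votes = np.zeros(180)
    for _rho, theta, length in lines:
        votes[int(theta) % 180] += length  # vote the segment length into its 1-degree bin
    # keep only angle bins whose accumulated length exceeds the threshold
    return {t: votes[t] for t in range(180) if votes[t] > lam}
```

The patent's inlier/outlier cluster update is simplified here to per-bin accumulation; a full implementation would also separate clusters by ρ.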
Improved MSER feature extraction. Under different illumination conditions the brightness of a marker changes. After Gaussian filtering of the image, the structured-forest training model yields a map of the structural edges, and MSER features extract regions brighter or darker than their surroundings well. Therefore, after the edge structure in the image is enhanced, MSER features are adopted to extract the maximally stable extremal regions. The edge-enhanced images can be expressed as:
I_bd = I_gray − I_edge (7)

I_db = (1 − I_edge) − I_gray (8)
where I_gray is the grayscale map of the input image, I_bd is the result of enhancing edges brighter in the image than other image areas, and I_db is the result of enhancing edges darker in the image than other image areas.
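Equations (7) and (8) translate directly into array operations; a minimal NumPy sketch, assuming both the grayscale image and the structured-forest edge map are normalized to [0, 1]:

```python
import numpy as np

def enhance_edges(gray, edge):
    # gray: grayscale image in [0, 1]; edge: structured-forest edge map in [0, 1]
    i_bd = gray - edge          # (7): strengthens edges brighter than surroundings
    i_db = (1.0 - edge) - gray  # (8): strengthens edges darker than surroundings
    return i_bd, i_db
```

Both outputs are then fed separately to the MSER detector, so bright-on-dark and dark-on-bright markers are each handled by one of the two maps.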
After the image enhancement processing, there is a significant difference between the region edges of the markers and the image background. The MSER feature detector is used to extract pixels with similar color information within a region, yielding the extremal stable regions of the image. Meanwhile, setting marker-related constraints reduces the number of candidate regions. As shown in Table 1, the constraints are:
(a) area ratio constraint: the ratio of the filled area of all pixels in the MSER region to its minimum bounding rectangle, mainly used to remove curved lane markings in the image;
(b) aspect ratio constraint: the width-to-height ratio of the minimum bounding rectangle of the candidate region, mainly used to remove fine crack regions in the image;
(c) width constraint: the allowed width range of a marker candidate region, set as a percentage of the image width;
(d) height constraint: the allowed height range of a marker candidate region, set as a percentage of the image height.
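Filtering MSER regions with constraints (a)-(d) can be sketched as follows; the threshold values and the dictionary region representation are illustrative assumptions, since the patent names the constraint types but leaves the concrete limits to Table 1:

```python
def passes_constraints(region, img_w, img_h,
                       min_fill=0.4, max_aspect=5.0,
                       w_range=(0.02, 0.5), h_range=(0.02, 0.5)):
    # region: dict with 'area' (filled pixel count) and bounding-box 'w', 'h'
    w, h = region["w"], region["h"]
    fill = region["area"] / float(w * h)                  # (a) area ratio
    aspect = max(w, h) / float(min(w, h))                 # (b) aspect ratio
    ok_w = w_range[0] * img_w <= w <= w_range[1] * img_w  # (c) width as % of image
    ok_h = h_range[0] * img_h <= h <= h_range[1] * img_h  # (d) height as % of image
    return fill >= min_fill and aspect <= max_aspect and ok_w and ok_h
```

A curved lane marking fails (a) through its low fill ratio, while a thin crack fails (b) and usually (d) as well.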
In a further improvement of the invention, step three is implemented as follows:
The PCANet classifier is composed of a PCANet structure and a one-vs-rest SVM classifier. Unlike a general deep-learning network structure, PCANet adopts a principal component analysis (PCA) network structure in place of the convolutional layers of a deep network; the nonlinear layer uses binary hash coding, and the pooling layer uses block histograms, which are concatenated to form the extended histogram feature of the image. For an input image, the PCA structure generally comprises zero-meaning and PCA filtering; the first k eigenvectors are solved to form a mapping matrix of the image features, and the first-layer PCA filter can be expressed as:
W_l^1 = mat_{k_1, k_2}(q_l(X X^T)) ∈ R^{k_1 × k_2},  l = 1, 2, …, L_1
where k_1 × k_2 is the size of the sliding window, q_l(X X^T) denotes the l-th principal eigenvector of the image features, and W_l^1 is the feature mapping matrix corresponding to the l-th of the L_1 largest eigenvalues of the covariance matrix. After two layers of PCA filters, the output values are binary hash coded, and the number of output code bits equals the number L_2 of second-layer filters:
T_i = Σ_{l=1}^{L_2} 2^{l−1} H(I_i ∗ W_l^2)
dividing the matrix output by the first layer into B blocks, counting the Hash code values of the B blocks, and cascading histogram features of the B blocks to form extended histogram features which represent the features extracted from the image:
f_i = [Bhist(T_i^1), …, Bhist(T_i^{L_1})]^T ∈ R^{(2^{L_2}) L_1 B}
where Bhist(T_i^l) denotes the sub-block histogram features, the output being a vector of dimension (2^{L_2}) L_1 B.
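The three PCANet stages above (PCA filter learning, binary hash coding, and block histograms) can be sketched in NumPy; patch extraction, the convolutions, and the two-layer cascade are omitted, and the function names and the 1-D block split are simplifying choices of this sketch:

```python
import numpy as np

def pca_filters(patches, num_filters):
    # PCA filter bank: leading eigenvectors of the zero-meaned patch
    # covariance (cf. the first-layer filter W_l^1); patches: (n, k1*k2).
    X = patches - patches.mean(axis=1, keepdims=True)  # remove per-patch mean
    cov = X.T @ X
    vals, vecs = np.linalg.eigh(cov)                   # ascending eigenvalues
    top = np.argsort(vals)[::-1][:num_filters]
    return vecs[:, top].T                              # (num_filters, k1*k2)

def binary_hash(responses):
    # T = sum_l 2^(l-1) * H(response_l); H = Heaviside step on each map.
    bits = (responses > 0).astype(np.int64)            # (L2, H, W)
    weights = 2 ** np.arange(bits.shape[0])            # 1, 2, 4, ...
    return np.tensordot(weights, bits, axes=1)         # integer code image

def block_histogram(code_img, num_blocks, levels):
    # Split the hash-code image into B blocks and concatenate their
    # histograms into the extended histogram feature (1-D split for brevity;
    # PCANet splits the image spatially).
    blocks = np.array_split(code_img.ravel(), num_blocks)
    return np.concatenate([np.bincount(b, minlength=levels) for b in blocks])
```

In the full network, binary_hash is applied to the L_2 second-layer responses of each first-layer output, and the L_1 resulting histograms are concatenated into f_i.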
Parameter optimization. For the PCANet structure, the parameters must be selected according to the specific target task, including the number of convolution layers, the size and number of filters, and the stride of the sliding window. In experiments, the recognition accuracy of a two-layer PCA filter structure is higher than that of a single-layer structure, but further increasing the number of PCA filter layers does not greatly improve accuracy; the invention therefore selects a two-layer PCA filter structure. Likewise, the more filters, the better the PCANet recognition effect; considering the marker categories in the practical application scene, 8 PCA filters per layer are selected, which meets the recognition requirement. The application scenario of PCANet differs depending on the area ratio of the sliding-window overlap region, and the overlap area ratio is set here to 0.5.
Model training. The image training set of the markers in the scene is made according to the practical application scene. The image training data set mainly comprises 9 target classes: 8 classes of marker region samples, including the circular markers and auxiliary-line corner regions in the scene, and 1 class of negative samples, obtained by randomly cropping regions from the acquired images and manually screening them to ensure that no marker sample is included. Meanwhile, to increase sample diversity, processing steps such as mirroring, distortion, and angle adjustment are applied to the images in the data set. During PCANet training, markers with similar structural features are trained and recognized jointly. The image training set consists of 400 images, 100 marker images per category, and can be divided into four main classes: circular markers, "+"-shaped markers, ""-shaped markers, and negative samples.
The effect of the present invention is described in detail below in connection with experiments.
To verify the effectiveness of the invention, the validity, accuracy, and real-time performance of the algorithm were analyzed and verified. The program ran under VS2013 with the OpenCV 2.4.10 image processing library configured, and the images used are 320 × 240.
(1) Algorithm validity verification
Using the proposed algorithm, markers were detected and identified on the image test data set; the experimental results are shown in FIG. 4. FIG. 4(a) shows image sequences of different markers acquired in the scene. FIG. 4(b) shows the edge detection results based on the structured forest; the algorithm effectively removes the interference of non-structural features such as speckles in the background. FIG. 4(c) shows the extracted auxiliary-line corner points and typical-marker candidate regions, including candidate regions generated by interference factors such as wheels and ground cracks in the background; auxiliary lines are drawn as blue lines, auxiliary-line corner points as green dots, and the generated marker candidate regions as green rectangles. From the candidate-region results, most corner points and marker regions are extracted effectively, but the edges and corner regions in some scenes are not, which relates to the auxiliary-line response threshold in the dynamic clustering process. FIG. 4(d) shows the marker identification results: for each extracted candidate region, PCANet performs binarized hash coding and principal component analysis, effectively distinguishing interference regions from marker regions; regions identified as markers are drawn as yellow rectangles.
(2) Performance comparison validation
Four comparison algorithms were constructed, as shown in Table 2. Comparison algorithm 1 adopts neither the structured-forest processing nor the PCANet-structure classifier: because HoughP line detection cannot effectively extract the background lines, it uses line clustering and MSER feature extraction based on LSD detection, with a classifier constructed from HOG features and an SVM for marker identification. Comparison algorithm 2 adopts the image enhancement processing based on the structured forest and identifies markers with a classifier constructed from HOG features and an SVM. Comparison algorithm 3 does not adopt the structured-forest-based image enhancement but dynamically clusters the line clusters of the traditional LSD algorithm, and identifies markers with a PCANet-structure classifier. The algorithm of the invention detects and identifies markers based on the structured forest and the PCANet structure. The experimental comparison results are shown in FIG. 5.
Tables 3 to 6 give, respectively, the precision, recall, comprehensive evaluation index, and average time-consumption statistics of the proposed algorithm and the comparison algorithms. According to the experimental results, in marker detection and identification the average precision of the proposed algorithm reaches 91.63% and its comprehensive evaluation index is 93.39%, both higher than those of the comparison algorithms, showing better robustness. Moreover, the average single-frame time consumption of the algorithm is 51.41 ms, higher than that of the other three comparison algorithms, but it still meets the real-time requirement.
TABLE 1 candidate region constraints
TABLE 2 structure of the comparison algorithms in the experiment
TABLE 3 statistical results of accuracy P in the experimental results
TABLE 4 statistical results of recall R in the experimental results
TABLE 5 statistical results of comprehensive evaluation index F in the experimental results
TABLE 6 statistics of single frame time consumption for each algorithm
The above description covers only the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention are intended to be included within the scope of protection of the present invention.

Claims (8)

1. A marker detection and identification method based on a structural forest and PCANet is characterized by comprising the following steps:
firstly, preprocessing an image; acquiring a video image sequence, and carrying out distortion correction on the image by combining internal parameters of a camera; obtaining a mapping image of an image edge structure by adopting an edge detection algorithm based on a structural forest;
secondly, extracting candidate regions: for auxiliary-line and corner-region extraction, on the basis of the obtained edge structure map, extracting the skeleton of each auxiliary line by a K3M sequential iteration method within a dynamic clustering algorithm based on skeleton extraction, and performing cluster analysis on the detected lines in Hough space: if a line is judged to be an inlier of an existing line cluster, updating that cluster; if the line is judged to be an outlier of the line clusters, updating the cluster categories and their number; fitting each auxiliary line with a least-squares algorithm and solving the intersection points between lines as corner regions; for typical marker region extraction, in order to enhance the edge-structure difference between a marker region and the background, enhancing the edge structures of the background and the marker in the edge structure map according to the following formulas; extracting the maximally stable extremal regions in the image with an MSER feature detector, and taking a maximally stable extremal region satisfying the set conditions as a marker candidate region; otherwise, deleting the region as an interference region; the formulas for enhancing the edge structures of the background and the marker in the edge structure map are:
I_bd = I_gray − I_edge

I_db = (1 − I_edge) − I_gray
where I_gray is the grayscale map of the input image, I_bd is the result of enhancing edges brighter in the image than other image areas, and I_db is the result of enhancing edges darker in the image than other image areas;
thirdly, identifying the marker: respectively calculating the binarized hash code of each candidate region according to the generated corner points and marker candidate regions to obtain the extended histogram feature; and carrying out classification and identification by adopting a classifier of a pre-trained PCANet structure.
2. The structural forest and PCANet based marker detection and identification method as claimed in claim 1, wherein the first step specifically comprises:
the random forest is composed of N independent decision trees T_i(x), each consisting of hierarchically arranged nodes; each decision tree T_i(x) is given its corresponding training set

S_i ⊂ X × Y
according to the principle of maximizing the node split information gain I_j, the separation function is determined as:

h(x, θ_j) ∈ {0, 1}

h(x, θ_j) = [x(k) < γ]
where

θ_j = (k, γ)
wherein θ_j is the parameter that maximizes the information gain, k is a quantized feature of x, and γ is the threshold corresponding to the feature; the node separation functions are trained recursively on the two subsets

S_j^L = {(x, y) ∈ S_j | h(x, θ_j) = 0}

and

S_j^R = S_j \ S_j^L
until a predetermined decision tree depth or information gain threshold is reached; the information gain is defined as:

I_j = H(S_j) − Σ_{k∈{L,R}} (|S_j^k| / |S_j|) H(S_j^k)

H(S_j) = −Σ_y p_y log p_y
where H(S_j) is the Shannon entropy and p_y is the probability that training data set S_j outputs label y;
to calculate I_j, for each node j all labels y in the node are mapped to discretized labels c, and c is used in place of y to calculate I_j:

π : y ∈ Y → c ∈ C = {1, 2, …, k}
the output layer of the structured random forest maps the high-dimensional output label y into a binary vector; a dimension-reducing quantization method of K-means clustering or principal component analysis is adopted to judge whether nodes with similar output labels y belong to the same marker, and each node is given a specific label number c ∈ C = {1, 2, …, k}; in the training process, BSDS500 is used as the training set of the structured random forest to obtain an edge detection model based on the structured forest for edge detection.
3. The structural forest and PCANet based marker detection and identification method as claimed in claim 1, wherein the second step specifically comprises: a K3M sequential iteration algorithm is adopted, which is divided into two steps: firstly, a pseudo-skeleton is extracted by performing connectivity analysis on the neighborhood of each pixel in a fixed rotation direction and removing the outer contour of the marker through continuous iterative erosion, yielding a pseudo-skeleton two pixels wide; then the 8-neighborhoods of all pixels on the pseudo-skeleton are weight-coded, and the true skeleton of the image is extracted;
each line l_j obtained from the extracted skeleton is converted into the Hough-space coordinates (ρ_j, θ_j), and line clustering is performed on them, where θ ∈ [0, 180°); θ is divided equally into 180 small intervals, votes are cast for all angles, and the sum of the line lengths of each class is then calculated:
Length_i = Σ_{j=1}^{m} length_j,  i = 1, 2, …, n
where n is the number of line clusters, m is the number of lines in each cluster, and length_j is the length of a detected line; when the summed length exceeds λ, all line clusters in that interval are clustered as a new line, λ being the minimum threshold on auxiliary-line length;
after Gaussian filtering of the image, the structured-forest training model yields a map of the structural edges; MSER features extract regions brighter or darker than the ambient brightness well; after the edge structure in the image is enhanced, MSER features are adopted to extract the maximally stable extremal regions, and the edge-enhanced images are expressed as:
I_bd = I_gray − I_edge

I_db = (1 − I_edge) − I_gray
where I_gray is the grayscale map of the input image, I_bd is the result of enhancing edges brighter in the image than other image areas, and I_db is the result of enhancing edges darker in the image than other image areas.
4. The structural forest and PCANet based marker detection and identification method as claimed in claim 3, wherein after the image enhancement processing there is a significant difference between the region edges of the markers and the image background; the MSER feature detector is used to extract pixels with similar color information within a region, yielding the extremal stable regions of the image; meanwhile, the number of candidate regions is reduced by setting marker-related constraints; the constraints are:
(a) area ratio constraint: the ratio of the filled area of all pixels in the MSER region to its minimum bounding rectangle, used to remove curved lane markings in the image;
(b) aspect ratio constraint: the width-to-height ratio of the minimum bounding rectangle of the candidate region, used to remove fine crack regions in the image;
(c) width constraint: the allowed width range of a marker candidate region, set as a percentage of the image width;
(d) height constraint: the allowed height range of a marker candidate region, set as a percentage of the image height.
5. The structural forest and PCANet-based marker detection and identification method as claimed in claim 2, wherein the third step is implemented as follows: for an input image, the PCANet structure comprises zero-meaning and PCA filtering operations; a mapping matrix of image features is formed by solving the first k eigenvectors, and the first-layer PCA filter is expressed as:
W_l^1 = mat_{k_1, k_2}(q_l(X X^T)) ∈ R^{k_1 × k_2},  l = 1, 2, …, L_1
where k_1 × k_2 is the size of the sliding window, q_l(X X^T) denotes the l-th principal eigenvector of the image features, and W_l^1 is the feature mapping matrix corresponding to the l-th of the L_1 largest eigenvalues of the covariance matrix; after two layers of PCA filters, the output values are binary hash coded, and the number of output code bits equals the number L_2 of second-layer filters:
T_i = Σ_{l=1}^{L_2} 2^{l−1} H(I_i ∗ W_l^2)
dividing the matrix output by the first layer into B blocks, counting the Hash code values of the B blocks, and cascading histogram features of the B blocks to form extended histogram features which represent the features extracted from the image:
f_i = [Bhist(T_i^1), …, Bhist(T_i^{L_1})]^T ∈ R^{(2^{L_2}) L_1 B}
where Bhist(T_i^l) denotes the sub-block histogram features, the output being a vector of dimension (2^{L_2}) L_1 B.
6. An automatic driving control system applying the structural forest and PCANet based marker detection and identification method as claimed in any one of claims 1-5.
7. A driver assistance system applying the structural forest and PCANet based marker detection and identification method as claimed in any one of claims 1-5.
8. A visual navigation control system applying the structural forest and PCANet based marker detection and identification method as claimed in any one of claims 1-5.
CN201910396062.1A 2019-05-14 2019-05-14 Marker detection and identification method based on structural forest and PCANet Active CN110263635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910396062.1A CN110263635B (en) 2019-05-14 2019-05-14 Marker detection and identification method based on structural forest and PCANet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910396062.1A CN110263635B (en) 2019-05-14 2019-05-14 Marker detection and identification method based on structural forest and PCANet

Publications (2)

Publication Number Publication Date
CN110263635A CN110263635A (en) 2019-09-20
CN110263635B true CN110263635B (en) 2022-09-09

Family

ID=67913193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910396062.1A Active CN110263635B (en) 2019-05-14 2019-05-14 Marker detection and identification method based on structural forest and PCANet

Country Status (1)

Country Link
CN (1) CN110263635B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783604A (en) * 2020-06-24 2020-10-16 中国第一汽车股份有限公司 Vehicle control method, device and equipment based on target identification and vehicle
CN112052723A (en) * 2020-07-23 2020-12-08 深圳市玩瞳科技有限公司 Literacy card, and desktop scene STR method and device based on image recognition
CN113065428A (en) * 2021-03-21 2021-07-02 北京工业大学 Automatic driving target identification method based on feature selection
CN113971697B (en) * 2021-09-16 2024-08-02 中国人民解放军火箭军工程大学 Air-ground cooperative vehicle positioning and orientation method
CN114283144B (en) * 2022-03-06 2022-05-17 山东金有粮脱皮制粉设备有限公司 Intelligent control method for stable operation of corncob crusher based on image recognition
CN116243353B (en) * 2023-03-14 2024-02-27 广西壮族自治区自然资源遥感院 Forest right investigation and measurement method and system based on Beidou positioning
CN116993740B (en) * 2023-09-28 2023-12-19 山东万世机械科技有限公司 Concrete structure surface defect detection method based on image data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL2014772B1 (en) * 2015-05-06 2017-01-26 Univ Erasmus Med Ct Rotterdam A lumbar navigation method, a lumbar navigation system and a computer program product.
CN105550709B (en) * 2015-12-14 2019-01-29 武汉大学 A kind of remote sensing image power transmission line corridor wood land extracting method
WO2017177259A1 (en) * 2016-04-12 2017-10-19 Phi Technologies Pty Ltd System and method for processing photographic images
CN106878674B (en) * 2017-01-10 2019-08-30 哈尔滨工业大学深圳研究生院 A kind of parking detection method and device based on monitor video
CN107704865A (en) * 2017-05-09 2018-02-16 北京航空航天大学 Fleet Targets Detection based on the extraction of structure forest edge candidate region
CN109254654B (en) * 2018-08-20 2022-02-01 杭州电子科技大学 Driving fatigue feature extraction method combining PCA and PCANet

Also Published As

Publication number Publication date
CN110263635A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110263635B (en) Marker detection and identification method based on structural forest and PCANet
CN109271991B (en) License plate detection method based on deep learning
CN107798335B (en) Vehicle logo identification method fusing sliding window and Faster R-CNN convolutional neural network
CN109657632B (en) Lane line detection and identification method
CN105373794B (en) A kind of licence plate recognition method
CN101334836B (en) License plate positioning method incorporating color, size and texture characteristic
CN108009518A (en) A kind of stratification traffic mark recognition methods based on quick two points of convolutional neural networks
CN110866430B (en) License plate recognition method and device
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN104899554A (en) Vehicle ranging method based on monocular vision
CN108960055B (en) Lane line detection method based on local line segment mode characteristics
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN103902985B (en) High-robustness real-time lane detection algorithm based on ROI
CN112488046B (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN107092876A (en) The low-light (level) model recognizing method combined based on Retinex with S SIFT features
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm
CN109886168B (en) Ground traffic sign identification method based on hierarchy
CN106407951A (en) Monocular vision-based nighttime front vehicle detection method
CN110516666B (en) License plate positioning method based on combination of MSER and ISODATA
CN113158954B (en) Automatic detection method for zebra crossing region based on AI technology in traffic offsite
CN109086671B (en) Night lane marking line video detection method suitable for unmanned driving
CN113033363A (en) Vehicle dense target detection method based on deep learning
Xuan et al. Robust lane-mark extraction for autonomous driving under complex real conditions
CN112699841A (en) Traffic sign detection and identification method based on driving video
CN111428538B (en) Lane line extraction method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant