CN108052904A - Method and device for acquiring lane lines - Google Patents
Method and device for acquiring lane lines
- Publication number
- CN108052904A (Application CN201711332712.3A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- probability
- pixel
- color
- obtains
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a method and device for acquiring lane lines. The method includes: performing an inverse perspective transformation on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image; using a YCbCr color-space model to determine, for each pixel in the transformed region-of-interest image, the color probability that the pixel belongs to a lane line; normalizing each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line; generating a lane-line gray-probability map from these gray probabilities; performing region segmentation on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map; and processing the binary segmentation result map to obtain the lane lines in the road to be detected. The method and device provided by the present invention improve the detection precision of lane lines.
Description
Technical field
The present invention relates to the technical field of intelligent transportation, and in particular to a method and device for acquiring lane lines.
Background technology
Driver-assistance systems are commonly used to compensate for driver error and fatigue. When a vehicle drifts out of its lane or is about to collide, the system warns the driver or automatically steers the vehicle back to a safe state, thereby improving driver safety. In such systems, the precision and robustness of lane-line detection are critical.
To improve detection precision, the prior art typically uses methods based on lane-line color features or on lane-line edge gray-value features. Color-based methods, however, require the lane lines to have sharply contrasting colors, which existing lane lines do not always exhibit, so the precision of detecting lane lines with such methods is limited. Edge-based methods, in turn, perform poorly on roads with many safety markings, where the gray-value features of different edges are highly similar, so their detection precision is also limited.
Consequently, existing lane-line detection methods yield limited detection precision.
Summary of the invention
The present invention provides a method and device for acquiring lane lines, so as to improve the detection precision of lane lines.
An embodiment of the present invention provides a method for acquiring lane lines, including:
performing an inverse perspective transformation on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image;
using a YCbCr color-space model to determine, for each pixel in the transformed region-of-interest image, the color probability that the pixel belongs to a lane line;
normalizing each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line;
generating a lane-line gray-probability map from the gray probabilities of the pixels;
performing region segmentation on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map;
and processing the binary segmentation result map to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, using the YCbCr color-space model to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line includes:
determining, according to Bayes' rule P(Ci|x) = P(x|Ci)·P(Ci)/P(x), the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
where P(Ci|x) is the probability that the pixel belongs to a lane line, the lane lines comprise N−1 color classes, Ci is the i-th color class, the color of each pixel is expressed as x = (Y, Cb, Cr), Y being the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component, P(x) is the color probability of pixel x, P(x|Ci) is the likelihood, P(Ci) = #Ci / Σk #Ck is the prior probability, #Ci is the number of samples in color class Ci, and K is the total number of color classes.
In an embodiment of the present invention, normalizing each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line includes:
normalizing the color probability P(Ci|x) of each pixel to obtain the gray probability that the pixel belongs to a lane line;
where P̂ denotes the gray probability that the pixel belongs to a lane line.
In an embodiment of the present invention, processing the binary segmentation result map to obtain the lane lines in the road to be detected includes:
performing morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map;
and processing the processed binary segmentation result map to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, processing the processed binary segmentation result map to obtain the lane lines in the road to be detected includes:
detecting the boundary curves in the processed binary segmentation result map using the Sobel algorithm;
applying a centerline operation to the boundary curves to obtain processed center curves;
and segmenting the center curves using the Hough transform, then fitting the resulting straight-line segments to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, detecting the boundary curves in the processed binary segmentation result map using the Sobel algorithm includes:
obtaining the horizontal and vertical gradient values of each pixel in the processed binary segmentation result map using the Sobel operator;
determining each pixel's gradient magnitude and direction from its horizontal and vertical values;
when a pixel's gradient magnitude exceeds a preset threshold and its gradient direction lies within a preset range, determining that the pixel is an edge point of the processed binary segmentation result map;
and detecting the boundary curves in the processed binary segmentation result map from the edge points.
In an embodiment of the present invention, after obtaining the lane lines in the road to be detected, the method includes:
displaying the lane lines of the road to be detected on a map.
An embodiment of the present invention further provides a device for acquiring lane lines, including:
a transformation unit, configured to perform an inverse perspective transformation on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image;
a determination unit, configured to use a YCbCr color-space model to determine, for each pixel in the transformed region-of-interest image, the color probability that the pixel belongs to a lane line;
a processing unit, configured to normalize each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line;
a generation unit, configured to generate the lane-line gray-probability map from the gray probabilities of the pixels;
a segmentation unit, configured to perform region segmentation on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map;
and an acquisition unit, configured to process the binary segmentation result map to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, the determination unit is specifically configured to determine, according to Bayes' rule P(Ci|x) = P(x|Ci)·P(Ci)/P(x), the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
where P(Ci|x) is the probability that the pixel belongs to a lane line, the lane lines comprise N−1 color classes, Ci is the i-th color class, the color of each pixel is expressed as x = (Y, Cb, Cr), Y being the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component, P(x) is the color probability of pixel x, P(x|Ci) is the likelihood, P(Ci) = #Ci / Σk #Ck is the prior probability, #Ci is the number of samples in color class Ci, and K is the total number of color classes.
In an embodiment of the present invention, the processing unit is specifically configured to normalize each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line;
where P̂ denotes the gray probability that the pixel belongs to a lane line.
In an embodiment of the present invention, the acquisition unit is specifically configured to perform morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map, and to process the processed binary segmentation result map to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, the acquisition unit is specifically configured to detect the boundary curves in the processed binary segmentation result map using the Sobel algorithm; apply a centerline operation to the boundary curves to obtain processed center curves; and segment the center curves using the Hough transform, then fit the resulting straight-line segments to obtain the lane lines in the road to be detected.
In an embodiment of the present invention, the acquisition unit is specifically configured to obtain the horizontal and vertical gradient values of each pixel in the processed binary segmentation result map using the Sobel operator; determine each pixel's gradient magnitude and direction from its horizontal and vertical values; when a pixel's gradient magnitude exceeds a preset threshold and its gradient direction lies within a preset range, determine that the pixel is an edge point of the processed binary segmentation result map; and detect the boundary curves in the processed binary segmentation result map from the edge points.
In an embodiment of the present invention, the device for acquiring lane lines further includes a display unit;
the display unit is configured to display the lane lines of the road to be detected on a map.
An embodiment of the present invention provides a method for acquiring lane lines: first, an inverse perspective transformation is performed on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image; a YCbCr color-space model is used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; each pixel's lane-line color probability is normalized to obtain the gray probability that the pixel belongs to a lane line; a lane-line gray-probability map is generated from these gray probabilities; region segmentation is performed on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to obtain the lane lines in the road to be detected, thereby improving the precision of lane-line detection.
Description of the drawings
Fig. 1 is a schematic diagram of a method for acquiring lane lines according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a camera position according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the dimensions of a captured image frame according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a region-of-interest image before and after the inverse perspective transformation according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a binary segmentation result map according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of processing a binary segmentation result map according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a binary segmentation result map after morphological processing according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of processed center curves according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the lane lines on both sides obtained after tracking according to an embodiment of the present invention;
Fig. 10 is a structural diagram of a device for acquiring lane lines according to an embodiment of the present invention;
Fig. 11 is a structural diagram of another device for acquiring lane lines according to an embodiment of the present invention.
Specific embodiments
Exemplary embodiments are described here, with examples illustrated in the accompanying drawings. In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
The terms "first", "second", "third", "fourth", and the like (if present) in the specification, claims, and drawings are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so labeled are interchangeable where appropriate, so that the embodiments of the present invention described herein can, for example, be implemented in orders other than those illustrated or described herein. Moreover, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or apparatus comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such a process, method, product, or apparatus.
In the prior art, detection is typically based on methods using lane-line color features or lane-line edge gray-value features. Color-based methods, however, require the lane lines to have sharply contrasting colors, which existing lane lines do not always exhibit, so the precision of detecting lane lines with such methods is limited. Edge-based methods, in turn, perform poorly on roads with many safety markings, where the gray-value features of different edges are highly similar; the precision of detecting lane lines with such methods is also limited, so the overall precision of lane-line detection remains low. To improve the precision of lane-line detection, an embodiment of the present invention provides a method for acquiring lane lines: first, an inverse perspective transformation is performed on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image; a YCbCr color-space model is used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; each pixel's lane-line color probability is normalized to obtain the gray probability that the pixel belongs to a lane line; a lane-line gray-probability map is generated from these gray probabilities; region segmentation is performed on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to obtain the lane lines in the road to be detected, thereby improving the precision of lane-line detection.
The technical solution of the present invention is illustrated below with specific embodiments. These embodiments may be combined with one another, and identical or similar concepts or processes are not repeated in some embodiments. The embodiments of the present invention are described below with reference to the drawings.
Fig. 1 is a schematic diagram of a method for acquiring lane lines according to an embodiment of the present invention. The method may be carried out by a lane-line acquisition device, which may be provided independently or integrated in other equipment. Referring to Fig. 1, the method for acquiring lane lines may include:
S101: performing an inverse perspective transformation on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image.
After the original image of the road to be detected is determined, the sky region may be removed from the original image to obtain the region-of-interest image, on which the inverse perspective transformation is then performed. Illustratively, when performing the inverse perspective transformation, the camera parameters can be used to transform the lane lines from the camera view into a bird's-eye view; after the inverse perspective transformation the lane lines are parallel and of identical width, and the lane can then be detected using filters or geometric constraints. The inverse perspective transformation maps the road image from the image coordinate system into the world coordinate system. Referring to Fig. 2 and Fig. 3: Fig. 2 is a schematic diagram of a camera position according to an embodiment of the present invention, and Fig. 3 is a schematic diagram of the dimensions of a captured image frame according to an embodiment of the present invention. The coordinates in the transformed region-of-interest image are x, y, z; γ and θ are the yaw and pitch angles of the camera, and the camera's field of view is 2α. The camera's position relative to the ground plane is given by d, h, l; the coordinates in the initial image frame are u, v, and the size of the initial image frame is Rx × Ry.
The inverse perspective transformation model maps each pixel (u, v) of the initial image frame to ground-plane coordinates (x, y, z) constrained to the road plane
z = 0.
After the inverse perspective transformation is applied to the region-of-interest image, the transformed region-of-interest image can be generated; see Fig. 4, a schematic diagram of a region-of-interest image before and after the inverse perspective transformation according to an embodiment of the present invention.
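The transformation model itself appears as a formula image in the original and is not reproduced above. As a hedged sketch of the general idea, the bird's-eye warp can be expressed as a planar homography estimated from four image-to-ground point correspondences; the function names `ipm_homography` and `warp_point` are illustrative and not taken from the patent.

```python
import numpy as np

def ipm_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping four image points (u, v) to
    four ground-plane points (x, y) via the direct linear transform."""
    A = []
    for (u, v), (x, y) in zip(src_pts, dst_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the right singular vector of A with the
    # smallest singular value (the null vector of A).
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, u, v):
    """Apply the homography to a single pixel (u, v)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Warping the whole region-of-interest image then amounts to applying `warp_point` (or an equivalent dense remap) to every pixel, after which the lane lines appear parallel and of equal width, as the description states.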
S102: using a YCbCr color-space model to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line.
Once the transformed region-of-interest image has been obtained in S101, the YCbCr color-space model can be used to determine the color probability that each of its pixels belongs to a lane line. Under varying illumination conditions the YCbCr space performs better than the RGB space and, by exploiting properties of human vision, it reduces the storage required for the color images captured by the camera. The transformed region-of-interest image is therefore first converted from the RGB color space to the YCbCr space, where Y is the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component; the YCbCr color-space model is then used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line.
Specifically, the RGB color space can be converted to the YCbCr space by a standard color-space conversion.
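The conversion matrix in the original is a formula image and is not reproduced above. The sketch below uses the standard full-range ITU-R BT.601 (JPEG-style) RGB-to-YCbCr conversion, which is one common choice and not necessarily the exact matrix used in the patent:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array with 0-255 values to full-range
    YCbCr using the ITU-R BT.601 coefficients."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```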
Optionally, in an embodiment of the present invention, S102 (using the YCbCr color-space model to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line) includes:
determining, according to Bayes' rule P(Ci|x) = P(x|Ci)·P(Ci)/P(x), the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
where P(Ci|x) is the probability that the pixel belongs to a lane line, the lane lines comprise N−1 color classes, Ci is the i-th color class, the color of each pixel is expressed as x = (Y, Cb, Cr), Y being the luma component, Cb the blue-difference chroma component, and Cr the red-difference chroma component, P(x) is the color probability of pixel x, P(x|Ci) is the likelihood, P(Ci) = #Ci / Σk #Ck is the prior probability, #Ci is the number of samples in color class Ci, and K is the total number of color classes.
S103: normalizing each pixel's lane-line color probability to obtain the gray probability that the pixel belongs to a lane line.
After the color probability that each pixel in the transformed region-of-interest image belongs to a lane line has been determined in S102, each pixel's color probability can be normalized to obtain the gray probability that the pixel belongs to a lane line. Optionally, S103 includes:
normalizing the color probability P(Ci|x) of each pixel to obtain the gray probability that the pixel belongs to a lane line;
where P̂ denotes the gray probability that the pixel belongs to a lane line.
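As a hedged sketch of S102 and S103: the snippet below estimates class priors from sample counts (matching the #Ci term) and, because the likelihood model is not spelled out in the translated text, assumes a diagonal-Gaussian likelihood per color class; the posterior for the lane class can then be rescaled to [0, 1] as a stand-in for the gray-probability normalization. All function names are illustrative.

```python
import numpy as np

def fit_class(samples):
    """Mean, variance, and count of one color class from (n, 3) YCbCr samples."""
    s = np.asarray(samples, dtype=float)
    return s.mean(axis=0), s.var(axis=0) + 1e-6, len(s)

def lane_posterior(x, classes):
    """P(Ci | x) for each class: priors #Ci / sum_k #Ck, and an assumed
    diagonal-Gaussian likelihood P(x | Ci)."""
    x = np.asarray(x, dtype=float)
    total = sum(n for _, _, n in classes)
    joint = []
    for mu, var, n in classes:
        like = np.exp(-0.5 * ((x - mu) ** 2 / var).sum()) \
               / np.sqrt((2.0 * np.pi * var).prod())
        joint.append(like * n / total)      # P(x|Ci) * P(Ci)
    joint = np.array(joint)
    return joint / joint.sum()              # divide by the evidence P(x)
```

Evaluating the lane-class posterior at every pixel and rescaling the result to [0, 1] yields the gray-probability map used in S104.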
S104: generating the lane-line gray-probability map from the gray probabilities of the pixels.
Once each pixel's lane-line color probability has been normalized in S103 to obtain its gray probability, the lane-line gray-probability map can be generated from these gray probabilities. In this map the lane-line regions have high density and the background has low density, but the map may contain a large amount of false-alarm information. Median filtering can therefore be used to suppress the sharp impulse noise in the image while preserving its edges and details.
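The median filtering mentioned above can be sketched as a plain 3x3 NumPy implementation (in practice a library routine would normally be used; the kernel size is illustrative):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: removes impulse (salt-and-pepper) noise while
    keeping edges sharper than linear smoothing would."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')   # replicate borders
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(pad[i:i + 3, j:j + 3])
    return out
```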
It should be noted that, in an embodiment of the present invention, after the inverse perspective transformation of S101 has produced the transformed region-of-interest image, that image may also be processed directly with an RGB color-space model to obtain a lane-line color-probability map, on which the clustering algorithm then performs region segmentation to obtain the binary segmentation result map.
S105: performing region segmentation on the lane-line gray-probability map with a clustering algorithm to obtain the binary segmentation result map.
Illustratively, the clustering algorithm may be the local-information fuzzy C-means clustering algorithm or the K-means clustering algorithm; other clustering algorithms may of course also be used. In an embodiment of the present invention, the clustering algorithm is the local-information fuzzy C-means algorithm, a clustering algorithm that uses the membership degree of each point to determine the extent to which the point belongs to a given cluster. The algorithm proceeds as follows:
(1) first, compute the centers of the K classes;
(2) minimize the objective function, where Gki is the fuzzy factor and dij is the spatial Euclidean distance between pixels i and j;
(3) update the fuzzy membership of gray value xj with respect to the k-th class, and iterate the previous steps until max|V(b) − V(b+1)| < ε, where V(b) is the fuzzy partition matrix and b is the iteration index, thereby obtaining the lane-line binary segmentation result map. See Fig. 5, a schematic diagram of a binary segmentation result map according to an embodiment of the present invention.
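The update formulas of steps (1)-(3) appear as images in the original and are not reproduced above. The sketch below implements plain fuzzy C-means on gray values to show the membership/center iteration; it omits the local-information term Gki that the patent adds, so it is a simplified illustration, not the patent's exact algorithm.

```python
import numpy as np

def fcm_segment(x, c=2, m=2.0, iters=100, tol=1e-6):
    """Fuzzy C-means on a 1-D array of gray values.
    Returns hard labels (argmax membership) and the cluster centers."""
    x = np.asarray(x, dtype=float)
    v = np.linspace(x.min(), x.max(), c)               # initial centers
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12    # distances, avoid /0
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                          # memberships u_kj
        um = u ** m
        v_new = (um @ x) / um.sum(axis=1)              # center update
        if np.max(np.abs(v_new - v)) < tol:            # |V(b) - V(b+1)| < eps
            v = v_new
            break
        v = v_new
    return u.argmax(axis=0), v
```

Thresholding the membership (here via argmax) converts the gray-probability map into the binary segmentation result map.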
S106: processing the binary segmentation result map to obtain the lane lines in the road to be detected.
After the binary segmentation result map is obtained in S105, it can be processed to obtain the lane lines in the road to be detected. As can be seen, before the clustering algorithm performs region segmentation on the lane-line gray-probability map, the YCbCr color-space model is first used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line, each pixel's color probability is normalized into the gray probability that the pixel belongs to a lane line, and the lane-line gray-probability map is generated from these gray probabilities; region segmentation is then applied to the generated gray-probability map, thereby improving the precision of lane-line detection.
An embodiment of the present invention provides a method for acquiring lane lines: first, an inverse perspective transformation is performed on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image; a YCbCr color-space model is used to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line; each pixel's lane-line color probability is normalized to obtain the gray probability that the pixel belongs to a lane line; a lane-line gray-probability map is generated from these gray probabilities; region segmentation is performed on the lane-line gray-probability map with a clustering algorithm to obtain a binary segmentation result map; and the binary segmentation result map is processed to obtain the lane lines in the road to be detected, thereby improving the precision of lane-line detection.
Based on embodiment shown in FIG. 1, after binary segmentation result figure is obtained, at binary segmentation result figure
Reason, and the lane line in road to be detected is obtained, it can be accomplished by the following way, shown in Figure 6, Fig. 6 is the present invention
A kind of flow diagram handled binary segmentation result figure that embodiment provides.
S601, Morphological scale-space is carried out to binary segmentation result figure, the binary segmentation result figure that obtains that treated.
Exemplarily, in this embodiment of the present invention, morphological processing is applied to the binary segmentation result map in order to close gaps narrower than a rectangular structuring element and to remove noise, so that the lane line regions in the processed binary segmentation result map are more complete. Referring to FIG. 7, FIG. 7 is a schematic diagram of a binary segmentation result map after morphological processing according to an embodiment of the present invention.
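The patent does not specify which morphological operation S601 uses; a closing (dilation followed by erosion) with a small rectangular structuring element, which fills gaps narrower than the element, is a common choice for completing lane line regions. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k rectangular structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion with a k x k rectangular structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def close_binary(img, k=3):
    """Morphological closing: dilation then erosion fills small gaps."""
    return erode(dilate(img, k), k)

# A lane-line stripe with a one-pixel break: closing fills the break.
stripe = np.zeros((5, 7), dtype=np.uint8)
stripe[2, :] = 1
stripe[2, 3] = 0
closed = close_binary(stripe)
```

On real binary segmentation maps an optimized routine such as OpenCV's `cv2.morphologyEx` with `cv2.MORPH_CLOSE` would be used; the explicit loops above only make the operation visible.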
S602: detect the boundary curves in the processed binary segmentation result map using the Sobel algorithm.
Optionally, in this embodiment of the present invention, S602 (detecting the boundary curves in the processed binary segmentation result map using the Sobel algorithm) can include: obtaining the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map using the Sobel algorithm; determining the gradient magnitude and direction of each pixel from its horizontal and vertical derivative values; when the gradient magnitude of a pixel exceeds a preset threshold and its gradient direction lies within a preset range, determining that pixel to be an edge point of the processed binary segmentation result map; and detecting the boundary curves in the processed binary segmentation result map from the edge points.
Specifically, the Sobel algorithm can locate edges accurately even in low-contrast images. The horizontal and vertical derivative approximations of the image $A$ are

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A,$$

and the gradient magnitude and direction are

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right).$$
When the gradient magnitude of a pixel exceeds a given threshold and its direction lies within a given range, the pixel is set as an edge point of the lane line region. Once all edge points have been determined, the boundary curves in the processed binary segmentation result map can be detected from them. Referring to FIG. 7, FIG. 7 is a schematic diagram of the boundary curves in the processed binary segmentation result map detected using the Sobel algorithm according to an embodiment of the present invention.
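The S602 procedure can be sketched in NumPy with the standard Sobel kernels; the magnitude threshold and angle range below are illustrative placeholders, since the patent only states that they are preset values:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, kernel):
    """3x3 'same' convolution with zero padding."""
    padded = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    kf = kernel[::-1, ::-1]  # flip kernel for true convolution
    for dy in range(3):
        for dx in range(3):
            out += kf[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def sobel_edges(img, mag_thresh, ang_range):
    """Mark pixels whose gradient magnitude exceeds mag_thresh and whose
    gradient direction (radians) lies within ang_range as edge points."""
    gx = conv2_same(img, SOBEL_X)   # horizontal derivative G_x
    gy = conv2_same(img, SOBEL_Y)   # vertical derivative G_y
    mag = np.hypot(gx, gy)          # G = sqrt(G_x^2 + G_y^2)
    ang = np.arctan2(gy, gx)        # theta = atan2(G_y, G_x)
    lo, hi = ang_range
    return (mag > mag_thresh) & (ang >= lo) & (ang <= hi)

# A vertical intensity step yields strong horizontal gradients.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img, mag_thresh=1.0, ang_range=(-np.pi, np.pi))
```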
S603: perform a centerline operation on the boundary curves to obtain processed center curves.
The centerline of a lane line region can be taken as the average of the region's two edge curves. Referring to FIG. 8, FIG. 8 is a schematic diagram of a processed center curve according to an embodiment of the present invention.
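The averaging in S603 can be illustrated with two hypothetical edge curves sampled at the same image rows (the coordinate values are made up for illustration):

```python
import numpy as np

# x-coordinates of the left and right edge curves of one lane-line
# region, sampled at the same image rows (illustrative values).
left_edge = np.array([10.0, 11.0, 12.0, 13.0])
right_edge = np.array([16.0, 17.0, 18.0, 19.0])

# The center curve is the row-wise average of the two edge curves.
center = (left_edge + right_edge) / 2.0
```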
S604: segment the center curves using the Hough transform, and fit the resulting line segments to obtain the lane line in the road to be detected.
After the processed center curves are obtained in S603, straight lines can be searched for separately in the left and right halves of the bottom region of the image using the Hough transform. The Hough transform votes in the polar-coordinate parameter space of lines, and the local peaks of the accumulator are the candidate lines. A candidate line must satisfy the following conditions: (1) the angle between the line and the vertical direction must be within 25°; (2) the distance between the line's bottom intersection and the bottom center point of the image must not exceed the width of one lane in the image; (3) the two lane lines must be parallel; and (4) when several lines satisfy these conditions, the line closest to the bottom center point is selected on each side as the lane line.
The most significant line in each of the left and right halves of the bottom region of the image can be determined from the four conditions above. In the current frame, a fixed-length segment at the bottom of each detected line is chosen as the initial segment of the corresponding lane line, and the lane line is then tracked along the direction of the initial segment. An entire lane line can be approximated as many line segments joined end to end, and the angle of a tracked segment on one side can be used to approximately track an interrupted lane line on the other side; when the lane line is interrupted or the road is shadowed, the tracking procedure can still continuously detect the complete lane line of the current frame. Referring to FIG. 9, FIG. 9 is a schematic diagram of the two lane lines obtained after tracking according to an embodiment of the present invention. After FIG. 9 is obtained, a quadratic polynomial can be fitted by the least squares method, and the fitted curve equation is obtained by solving for the coefficients of the fitting polynomial. The fitting procedure turns the approximate tracking result into a high-precision fitted curve through the quadratic polynomial model, improving the reliability of detection.
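The final least-squares fit can be sketched with `np.polyfit`, which solves the quadratic least-squares problem directly; the tracked points below are synthetic, placed on a known parabola so the recovered coefficients can be verified:

```python
import numpy as np

# Synthetic tracked centerline points (image row y, column x).
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = 0.5 * y**2 + 2.0 * y + 3.0      # points on x = 0.5 y^2 + 2 y + 3

# Fit x = a*y^2 + b*y + c by least squares.
a, b, c = np.polyfit(y, x, deg=2)

# Evaluate the fitted curve at the sample rows.
x_fit = np.polyval([a, b, c], y)
```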
Optionally, in this embodiment of the present invention, after the lane line in the road to be detected is obtained, the method can further include: displaying the lane line in the road to be detected on a map.
The slope of the lane line in each frame is tracked with a Kalman filter: the current state is predicted from the previous state and then corrected by the measurement, which is more accurate than an independent measurement in each frame. Correcting frame by frame in this way yields an accurate global map, on which the lane line in the road to be detected can be displayed.
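The per-frame slope tracking can be sketched as a scalar Kalman filter with a constant-state model; the process and measurement noise values below are illustrative assumptions, not parameters from the patent:

```python
def kalman_step(est, var, meas, q=1e-3, r=1e-2):
    """One predict-and-correct step of a scalar Kalman filter.
    est, var : previous slope estimate and its variance
    meas     : measured lane-line slope in the current frame
    q, r     : process / measurement noise variances (illustrative)
    """
    var_pred = var + q                    # predict: slope assumed constant
    gain = var_pred / (var_pred + r)      # Kalman gain
    est_new = est + gain * (meas - est)   # correct with the measurement
    var_new = (1.0 - gain) * var_pred
    return est_new, var_new

# Noisy slope measurements around a true slope of 0.5.
est, var = 0.0, 1.0
for meas in [0.52, 0.49, 0.51, 0.50, 0.50]:
    est, var = kalman_step(est, var, meas)
```

After a few frames the filtered estimate settles near the true slope, with a variance well below that of a single measurement.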
To verify the lane line acquisition method provided by the embodiments of the present invention, the related data in Table 1 are taken as an example: 300 frames were chosen from a video sequence captured by a vehicle on urban roads. The road quality in these images is relatively poor, the frames include varying environments such as rain and night, and several curve scenarios are included. Each frame is 320×280 pixels, the average lane line detection time per frame is 22 milliseconds (an average of 45.5 frames per second), and at a vehicle speed of 100 km/h (27.8 m per second) the system updates the lane line every 0.61 m. The detection results are shown in Table 1.
Table 1: Detection accuracy under various environments
The detection results in Table 1 show that the lane line is detected correctly in 295 frames, an accuracy of 98.3%. The method still performs well when the lane line is partially occluded, broken, or shadowed; therefore, the lane line acquisition method provided by the embodiments of the present invention has good immunity to interference.
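The timing figures reported above are mutually consistent, which a quick arithmetic check confirms:

```python
frame_time_s = 0.022                     # 22 ms average detection time per frame
fps = 1.0 / frame_time_s                 # about 45.5 frames per second

speed_m_per_s = 100 * 1000 / 3600.0      # 100 km/h is about 27.8 m/s
update_m = speed_m_per_s * frame_time_s  # about 0.61 m between lane-line updates
```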
FIG. 10 is a schematic structural diagram of a lane line acquisition device 100 according to an embodiment of the present invention. Referring to FIG. 10, the lane line acquisition device 100 can include:
a conversion unit 1001, configured to perform inverse perspective mapping on a region-of-interest image of the road to be detected, obtaining a transformed region-of-interest image;
a determination unit 1002, configured to determine, using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
a processing unit 1003, configured to normalize the color probability that each pixel belongs to a lane line, obtaining the gray probability that each pixel belongs to a lane line;
a generation unit 1004, configured to obtain a lane line gray probability map from the gray probability that each pixel belongs to a lane line;
a segmentation unit 1005, configured to perform region segmentation on the lane line gray probability map using a clustering algorithm, obtaining a binary segmentation result map; and
an acquisition unit 1006, configured to process the binary segmentation result map, obtaining the lane line in the road to be detected.
Optionally, the determination unit 1002 is specifically configured to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line according to

$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad P(C_i) = \frac{\#C_i}{\sum_{k=1}^{K} \#C_k},$$

where $P(C_i \mid x)$ represents the probability that the pixel belongs to the lane line; $N-1$ indicates that the lane lines comprise $N-1$ color classes; $C_i$ is the $i$-th color class; the color of each pixel is expressed as $x = (Y, C_b, C_r)$, with $Y$ the luminance component, $C_b$ the blue chrominance component, and $C_r$ the red chrominance component; $P(x)$ is the color probability of pixel $x$; $P(x \mid C_i)$ is the likelihood; $P(C_i)$ is the prior probability; $\#C_i$ is the number of samples in color class $C_i$; and $K$ represents the number of color classes.
Optionally, the processing unit 1003 is specifically configured to normalize the color probability that each pixel belongs to a lane line according to a normalization formula, obtaining the gray probability that each pixel belongs to a lane line, wherein the normalized value represents the gray probability of the pixel.
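The Bayesian computation performed by the determination unit 1002 can be sketched as follows; the class names, sample counts, and likelihood values are illustrative placeholders, not trained values from the patent:

```python
# Sample counts per color class C_i from a hypothetical training set.
counts = {"white_lane": 400, "yellow_lane": 100, "road": 1500}
total = sum(counts.values())
prior = {c: n / total for c, n in counts.items()}  # P(C_i) = #C_i / sum_k #C_k

def posterior(lik, cls):
    """Bayes' rule: P(C_i | x) = P(x | C_i) P(C_i) / P(x),
    where P(x) = sum_j P(x | C_j) P(C_j).
    lik maps each class to its likelihood P(x | C_j) for this pixel."""
    px = sum(lik[c] * prior[c] for c in prior)
    return lik[cls] * prior[cls] / px

# Likelihoods of one pixel's YCbCr value under each class (illustrative).
lik = {"white_lane": 0.8, "yellow_lane": 0.05, "road": 0.1}
p_white = posterior(lik, "white_lane")

# The posteriors sum to 1 across classes, so they already behave as
# normalized probabilities suitable for a gray probability map.
total_post = sum(posterior(lik, c) for c in prior)
```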
Optionally, the acquisition unit 1006 is specifically configured to perform morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map, and to process the processed binary segmentation result map to obtain the lane line in the road to be detected.
Optionally, the acquisition unit 1006 is specifically configured to detect the boundary curves in the processed binary segmentation result map using the Sobel algorithm; perform a centerline operation on the boundary curves to obtain processed center curves; segment the center curves using the Hough transform; and fit the resulting line segments to obtain the lane line in the road to be detected.
Optionally, the acquisition unit 1006 is specifically configured to obtain the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map using the Sobel algorithm; determine the gradient magnitude and direction of each pixel from its horizontal and vertical derivative values; when the gradient magnitude of a pixel exceeds a preset threshold and its gradient direction lies within a preset range, determine that pixel to be an edge point of the processed binary segmentation result map; and detect the boundary curves in the processed binary segmentation result map from the edge points.
Optionally, the lane line acquisition device 100 can further include a display unit 1007. Referring to FIG. 11, FIG. 11 is a schematic structural diagram of another lane line acquisition device 100 according to an embodiment of the present invention.
The display unit 1007 is configured to display the lane line in the road to be detected on a map.
The lane line acquisition device 100 described above can perform the technical solution of the lane line acquisition method of any of the foregoing embodiments; its implementation principle and technical effect are similar and are not repeated here.
Other embodiments of the disclosure will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method for acquiring a lane line, characterized by comprising:
performing inverse perspective mapping on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image;
determining, using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
normalizing the color probability that each pixel belongs to a lane line to obtain the gray probability that each pixel belongs to a lane line;
obtaining a lane line gray probability map from the gray probability that each pixel belongs to a lane line;
performing region segmentation on the lane line gray probability map using a clustering algorithm to obtain a binary segmentation result map; and
processing the binary segmentation result map to obtain the lane line in the road to be detected.
2. The method according to claim 1, characterized in that determining, using the YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line comprises:
determining the color probability that each pixel in the transformed region-of-interest image belongs to a lane line according to

$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad P(C_i) = \frac{\#C_i}{\sum_{k=1}^{K} \#C_k};$$

wherein $P(C_i \mid x)$ represents the probability that the pixel belongs to the lane line; $N-1$ indicates that the lane lines comprise $N-1$ color classes; $C_i$ is the $i$-th color class; the color of each pixel is expressed as $x = (Y, C_b, C_r)$, with $Y$ the luminance component, $C_b$ the blue chrominance component, and $C_r$ the red chrominance component; $P(x)$ is the color probability of pixel $x$; $P(x \mid C_i)$ is the likelihood; $P(C_i)$ is the prior probability; $\#C_i$ is the number of samples in color class $C_i$; and $K$ represents the number of color classes.
3. The method according to claim 2, characterized in that normalizing the color probability that each pixel belongs to a lane line to obtain the gray probability that each pixel belongs to a lane line comprises:
normalizing the color probability that each pixel belongs to a lane line according to a normalization formula to obtain the gray probability that each pixel belongs to a lane line, wherein the normalized value represents the gray probability that the pixel belongs to the lane line.
4. The method according to any one of claims 1 to 3, characterized in that processing the binary segmentation result map to obtain the lane line in the road to be detected comprises:
performing morphological processing on the binary segmentation result map to obtain a processed binary segmentation result map; and
processing the processed binary segmentation result map to obtain the lane line in the road to be detected.
5. The method according to claim 4, characterized in that processing the processed binary segmentation result map to obtain the lane line in the road to be detected comprises:
detecting the boundary curves in the processed binary segmentation result map using the Sobel algorithm;
performing a centerline operation on the boundary curves to obtain processed center curves; and
segmenting the center curves using the Hough transform, and fitting the resulting line segments to obtain the lane line in the road to be detected.
6. The method according to claim 5, characterized in that detecting the boundary curves in the processed binary segmentation result map using the Sobel algorithm comprises:
obtaining the horizontal and vertical derivative values of each pixel in the processed binary segmentation result map using the Sobel algorithm;
determining the gradient magnitude and direction of each pixel from its horizontal and vertical derivative values;
when the gradient magnitude of a pixel exceeds a preset threshold and its gradient direction lies within a preset range, determining that pixel to be an edge point of the processed binary segmentation result map; and
detecting the boundary curves in the processed binary segmentation result map from the edge points.
7. The method according to any one of claims 1 to 3, characterized in that after obtaining the lane line in the road to be detected, the method comprises:
displaying the lane line in the road to be detected on a map.
8. A device for acquiring a lane line, characterized by comprising:
a conversion unit, configured to perform inverse perspective mapping on a region-of-interest image of a road to be detected to obtain a transformed region-of-interest image;
a determination unit, configured to determine, using a YCbCr color space model, the color probability that each pixel in the transformed region-of-interest image belongs to a lane line;
a processing unit, configured to normalize the color probability that each pixel belongs to a lane line, obtaining the gray probability that each pixel belongs to a lane line;
a generation unit, configured to obtain a lane line gray probability map from the gray probability that each pixel belongs to a lane line;
a segmentation unit, configured to perform region segmentation on the lane line gray probability map using a clustering algorithm, obtaining a binary segmentation result map; and
an acquisition unit, configured to process the binary segmentation result map to obtain the lane line in the road to be detected.
9. The device according to claim 8, characterized in that the determination unit is specifically configured to determine the color probability that each pixel in the transformed region-of-interest image belongs to a lane line according to

$$P(C_i \mid x) = \frac{P(x \mid C_i)\,P(C_i)}{P(x)}, \qquad P(C_i) = \frac{\#C_i}{\sum_{k=1}^{K} \#C_k};$$

wherein $P(C_i \mid x)$ represents the probability that the pixel belongs to the lane line; $N-1$ indicates that the lane lines comprise $N-1$ color classes; $C_i$ is the $i$-th color class; the color of each pixel is expressed as $x = (Y, C_b, C_r)$, with $Y$ the luminance component, $C_b$ the blue chrominance component, and $C_r$ the red chrominance component; $P(x)$ is the color probability of pixel $x$; $P(x \mid C_i)$ is the likelihood; $P(C_i)$ is the prior probability; $\#C_i$ is the number of samples in color class $C_i$; and $K$ represents the number of color classes.
10. The device according to claim 9, characterized in that the processing unit is specifically configured to normalize the color probability that each pixel belongs to a lane line according to a normalization formula, obtaining the gray probability that each pixel belongs to a lane line, wherein the normalized value represents the gray probability that the pixel belongs to the lane line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711332712.3A CN108052904B (en) | 2017-12-13 | 2017-12-13 | Method and device for acquiring lane line |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108052904A true CN108052904A (en) | 2018-05-18 |
CN108052904B CN108052904B (en) | 2021-11-30 |
Family
ID=62132692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711332712.3A Active CN108052904B (en) | 2017-12-13 | 2017-12-13 | Method and device for acquiring lane line |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108052904B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103488975A (en) * | 2013-09-17 | 2014-01-01 | 北京联合大学 | Zebra crossing real-time detection method based in intelligent driving |
CN104331873A (en) * | 2013-07-22 | 2015-02-04 | 浙江大学 | Method for detecting road from single image |
CN104598892A (en) * | 2015-01-30 | 2015-05-06 | 广东威创视讯科技股份有限公司 | Dangerous driving behavior alarming method and system |
CN105261020A (en) * | 2015-10-16 | 2016-01-20 | 桂林电子科技大学 | Method for detecting fast lane line |
CN105631880A (en) * | 2015-12-31 | 2016-06-01 | 百度在线网络技术(北京)有限公司 | Lane line segmentation method and apparatus |
CN105678285A (en) * | 2016-02-18 | 2016-06-15 | 北京大学深圳研究生院 | Adaptive road aerial view transformation method and road lane detection method |
CN105740796A (en) * | 2016-01-27 | 2016-07-06 | 大连楼兰科技股份有限公司 | Grey level histogram based post-perspective transformation lane line image binarization method |
CN106446864A (en) * | 2016-10-12 | 2017-02-22 | 成都快眼科技有限公司 | Method for detecting feasible road |
CN106558051A (en) * | 2015-09-25 | 2017-04-05 | 浙江大学 | A kind of improved method for detecting road from single image |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020103892A1 (en) * | 2018-11-21 | 2020-05-28 | 北京市商汤科技开发有限公司 | Lane line detection method and apparatus, electronic device, and readable storage medium |
CN111209770A (en) * | 2018-11-21 | 2020-05-29 | 北京三星通信技术研究有限公司 | Lane line identification method and device |
CN111209770B (en) * | 2018-11-21 | 2024-04-23 | 北京三星通信技术研究有限公司 | Lane line identification method and device |
CN111460866B (en) * | 2019-01-22 | 2023-12-22 | 北京市商汤科技开发有限公司 | Lane line detection and driving control method and device and electronic equipment |
CN111460866A (en) * | 2019-01-22 | 2020-07-28 | 北京市商汤科技开发有限公司 | Lane line detection and driving control method and device and electronic equipment |
WO2020258894A1 (en) * | 2019-06-25 | 2020-12-30 | 北京市商汤科技开发有限公司 | Lane line property detection |
JP2021532449A (en) * | 2019-06-25 | 2021-11-25 | 北京市商▲湯▼科技▲開▼▲發▼有限公司Beijing Sensetime Technology Development Co., Ltd. | Lane attribute detection |
JP7119197B2 (en) | 2019-06-25 | 2022-08-16 | 北京市商▲湯▼科技▲開▼▲發▼有限公司 | Lane attribute detection |
CN112131914B (en) * | 2019-06-25 | 2022-10-21 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN112131914A (en) * | 2019-06-25 | 2020-12-25 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and device, electronic equipment and intelligent equipment |
CN110595499A (en) * | 2019-09-26 | 2019-12-20 | 北京四维图新科技股份有限公司 | Lane change reminding method, device and system |
CN112101163A (en) * | 2020-09-04 | 2020-12-18 | 淮阴工学院 | Lane line detection method |
CN112418187A (en) * | 2020-12-15 | 2021-02-26 | 潍柴动力股份有限公司 | Lane line recognition method and apparatus, storage medium, and electronic device |
CN112633151A (en) * | 2020-12-22 | 2021-04-09 | 浙江大华技术股份有限公司 | Method, device, equipment and medium for determining zebra crossing in monitored image |
CN112633151B (en) * | 2020-12-22 | 2024-04-12 | 浙江大华技术股份有限公司 | Method, device, equipment and medium for determining zebra stripes in monitoring images |
Also Published As
Publication number | Publication date |
---|---|
CN108052904B (en) | 2021-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108052904A (en) | The acquisition methods and device of lane line | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN107679520B (en) | Lane line visual detection method suitable for complex conditions | |
CN109242884B (en) | Remote sensing video target tracking method based on JCFNet network | |
CN104318258B (en) | Time domain fuzzy and kalman filter-based lane detection method | |
CN109064495B (en) | Bridge deck vehicle space-time information acquisition method based on fast R-CNN and video technology | |
Rasmussen | Combining laser range, color, and texture cues for autonomous road following | |
Kong et al. | Vanishing point detection for road detection | |
Kong et al. | General road detection from a single image | |
CN111860375A (en) | Plant protection unmanned aerial vehicle ground monitoring system and monitoring method thereof | |
CN110286124A (en) | Refractory brick measuring system based on machine vision | |
Ding et al. | An adaptive road ROI determination algorithm for lane detection | |
CN104217217B (en) | A kind of vehicle mark object detecting method and system based on two layers of classified | |
CN105261017A (en) | Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction | |
CN104766058A (en) | Method and device for obtaining lane line | |
CN111738314A (en) | Deep learning method of multi-modal image visibility detection model based on shallow fusion | |
CN106558051A (en) | A kind of improved method for detecting road from single image | |
Le et al. | Real time traffic sign detection using color and shape-based features | |
CN110021029B (en) | Real-time dynamic registration method and storage medium suitable for RGBD-SLAM | |
CN111046843A (en) | Monocular distance measurement method under intelligent driving environment | |
CN106407951A (en) | Monocular vision-based nighttime front vehicle detection method | |
Somawirata et al. | Road detection based on the color space and cluster connecting | |
CN109858310A (en) | Vehicles and Traffic Signs detection method | |
Poggenhans et al. | A universal approach to detect and classify road surface markings | |
CN105574892A (en) | Doppler-based segmentation and optical flow in radar images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||