WO2020164010A1 - 车道线检测方法、装置、系统与车辆、存储介质 - Google Patents

车道线检测方法、装置、系统与车辆、存储介质 Download PDF

Info

Publication number
WO2020164010A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
image data
observation
lane
data
Prior art date
Application number
PCT/CN2019/074962
Other languages
English (en)
French (fr)
Inventor
许睿
崔健
陈竞
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201980005382.8A priority Critical patent/CN111316284A/zh
Priority to PCT/CN2019/074962 priority patent/WO2020164010A1/zh
Publication of WO2020164010A1 publication Critical patent/WO2020164010A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the embodiment of the present invention belongs to the field of intelligent transportation technology, and in particular relates to a lane line detection method, device, system, vehicle, and storage medium.
  • the more mature technology is based on a high-precision map constructed offline in advance.
  • First, a high-precision inertial measurement unit (IMU) and lidar are used to collect map information of the relevant road sections. This map information must then be annotated offline to form a high-precision map, so that while driving the vehicle can load the offline high-precision map and localize itself through point cloud registration. In this way, the lane line information of the area where the vehicle is located can be obtained.
  • However, existing lane line detection schemes are limited by high-precision maps constructed offline. If the high-precision map does not match the real-time environment, for example because of missing regions or lagging information, autonomous vehicles will face a greater safety risk.
  • the embodiments of the present invention provide a lane line detection method, device, system, vehicle, and storage medium, which are used to implement lane line detection without using high-precision maps, so as to reduce the safety risk of autonomous vehicles.
  • an embodiment of the present invention provides a lane line detection method, including:
  • acquiring image data including lane lines; processing the image data according to prior information of the lane lines to obtain a lane line observation result of the current frame of image data; and associating the lane line observation result with historical observation results to obtain a lane line detection result.
  • an embodiment of the present invention provides a lane line detection device, including:
  • an acquisition module for acquiring image data including lane lines;
  • a processing module configured to process the image data according to the prior information of the lane lines to obtain the lane line observation result of the current frame of image data;
  • an association module configured to associate the lane line observation result with historical observation results to obtain the lane line detection result.
  • an embodiment of the present invention provides a lane line detection device, including:
  • a memory, a processor, and instructions; the instructions are stored in the memory and are configured to be executed by the processor to implement the method according to the first aspect.
  • an embodiment of the present invention provides a lane line detection system, including:
  • the lane line detection device includes: a memory and a processor; the memory is used to store instructions, and the processor is used to execute the instructions and implement the method according to the first aspect;
  • an image acquisition device for acquiring initial image data and sending it to the processing device;
  • the processing device, configured to perform visual recognition processing on the initial image data and send the processed image data to the lane line detection device;
  • a pose sensor for collecting vehicle pose data and sending the pose data to the lane line detection device.
  • an embodiment of the present invention provides a vehicle including the lane line detection device as described in the third aspect.
  • an embodiment of the present invention provides a vehicle, including: a body;
  • a power system connected to the vehicle body for driving the vehicle to move
  • a vehicle control system for controlling the vehicle
  • an embodiment of the present invention provides a computer-readable storage medium, characterized in that instructions are stored thereon, and the instructions are executed by a processor to implement the method described in the first aspect.
  • The lane line detection method, device, system, vehicle, and storage medium provided by the embodiments of the present invention start from image data containing lane lines and process it in combination with prior information to obtain the lane line observation result of the current frame of image data. Further, through association with historical observation results, a complete lane line detection result associated with historical frame image data can be obtained. Because the image data can be collected in real time, lane line detection can be achieved with this scheme as long as the image acquisition device can capture an initial image containing the lane lines, and the obtained lane line detection result matches the real-time environment, providing high flexibility. In addition, compared with schemes that build offline high-precision maps, the requirements on the image acquisition equipment are low and the hardware cost is relatively low. Therefore, the technical solutions provided by the embodiments of the present invention can realize lane line detection without resorting to high-precision maps, which reduces the safety risk of autonomous vehicles.
  • FIG. 1 is a schematic flowchart of a method for detecting lane lines according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another method for detecting lane lines according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a lane line scene provided by an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of another lane line detection method according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a lane line detection device provided by an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the physical structure of a lane line detection device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a lane line detection system provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a vehicle provided by an embodiment of the present invention.
  • Fig. 12 is a schematic structural diagram of another vehicle provided by an embodiment of the present invention.
  • the technical solution provided by the embodiment of the present invention is specifically applied to the observation scene of the lane line. Further, it can be applied to driving scenes of unmanned vehicles (or called autonomous vehicles). Or, it can be further applied to vehicle path planning scenarios.
  • the existing lane line observations are generally implemented based on pre-built high-precision maps.
  • the construction of high-precision maps requires high accuracy of hardware acquisition equipment.
  • high-precision IMUs and lidars are used to collect data and manually perform offline annotation to form high-precision maps.
  • Lane line observation based on a high-precision map must also be coordinated with localization of the vehicle during driving: through point cloud registration against the high-precision map, the lane line map information of the area where the vehicle is located can be matched, thereby achieving lane line observation.
  • the high-precision map constructed in advance still does not match the real-time environment.
  • On the one hand, the coverage of the high-precision map may not be comprehensive enough, that is, some regions are missing, which makes it impossible to observe lane lines by the aforementioned method in areas where no high-precision map has been built in advance. On the other hand, the high-precision map may suffer from synchronization lag: for example, if a road section is under construction or the road structure has been modified and the map is not updated in time, the information represented by the high-precision map becomes inconsistent with the actual road environment, and the original high-precision map cannot adapt to the current actual environment, resulting in low accuracy of the lane line observation results obtained from it.
  • In addition, the construction of high-precision maps also incurs large hardware costs (higher requirements for hardware accuracy lead to higher costs), labor costs (manual annotation is required), and time costs (the time to measure data in advance, annotation time, etc.).
  • In summary, existing lane line observation results are limited by high-precision maps. If the map does not match the actual road conditions, lane lines cannot be observed or are observed with low accuracy. Since the lane line observation result is an important basis for the driving of unmanned vehicles, failure to accurately adapt to the environment leads to greater safety risks.
  • The lane line detection scheme provided by the present invention aims to solve the above technical problems of the prior art and proposes the following idea: use the image acquisition equipment installed on the vehicle to capture images, process the lane lines contained in the images to obtain the lane line observation result of the current frame of image data, and fuse the lane line observation results in time sequence to obtain a lane line map.
  • the embodiment of the present invention provides a lane line detection method. Please refer to Figure 1.
  • the method includes the following steps:
  • S102 Acquire image data including lane lines.
  • the image data is image data including lane lines.
  • The lane lines involved in the embodiments of the present invention include, but are not limited to, lane boundary lines.
  • A lane boundary line refers to a lane line painted on the ground on one or both sides of a lane; it can include a single dashed line, a single solid line, a double solid line, etc., as well as white and yellow lane boundary lines.
  • the image data can be acquired by any image acquisition device, wherein the image acquisition device can be specifically set in the vehicle.
  • the embodiment of the present invention has no special limitations on the attributes such as the acquisition accuracy of the image acquisition device.
  • it can be acquired by an image acquisition device such as a gray-scale camera, a color camera and the like that can collect road conditions.
  • In other words, the embodiment of the present invention requires no additional hardware cost to implement and can be realized using only the image acquisition devices commonly installed in current vehicles. Compared with the solution of using a high-precision IMU and lidar to construct high-precision maps, this effectively reduces hardware costs.
  • the data collected by the image acquisition device may not include lane lines, and for image data that does not include lane lines, subsequent processing may not be performed.
  • S104 Process the image data according to the priori information of the lane line to obtain the lane line observation result of the current frame of image data.
  • The prior information of the lane line is used for verification, that is, to confirm what constitutes a lane line; in other words, it describes the characteristics that a lane line possesses.
  • the embodiment of the present invention may be implemented based on the parallelism of lane lines.
  • the prior information may include, but is not limited to: the parallelism between lane lines.
  • the so-called parallelism refers to the degree to which multiple lane lines are parallel to each other.
  • In some cases, parallelism can be measured by the direction angle between two lane lines. For example, the lane lines of a straight road section, and those of a curved road section under normal driving, can generally be considered parallel to each other.
  • It should be understood that parallelism between lane lines does not mean the lane lines are straight; on the aforementioned curved road section, if the points or local directions of the lane lines are consistent with each other, they can also be considered parallel to each other. Therefore, on the basis of this prior information, the solution provided by the embodiment of the present invention can be applied to the lane line detection scenarios of most driving sections.
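To make the parallelism prior concrete, the following is a minimal sketch (not from the patent) of how the local-direction consistency described above could be checked, assuming each lane line is given as an ordered array of 2D points; the 5-degree tolerance is an illustrative assumption:

```python
import numpy as np

def parallelism_deviation(line_a: np.ndarray, line_b: np.ndarray) -> float:
    """Largest angular difference (radians) between the local directions of
    two polylines, each an (N, 2) array of points. Small values mean the
    lines are locally parallel, even on curves."""
    def local_angles(pts):
        d = np.diff(pts, axis=0)              # local tangent vectors
        return np.arctan2(d[:, 1], d[:, 0])   # direction angle per segment
    ang_a, ang_b = local_angles(line_a), local_angles(line_b)
    n = min(len(ang_a), len(ang_b))           # compare pairwise along the lines
    if n == 0:
        return 0.0
    diff = np.abs(ang_a[:n] - ang_b[:n])
    return float(np.max(np.minimum(diff, 2 * np.pi - diff)))

# Illustrative use: treat the pair as parallel under a 5-degree tolerance.
# is_parallel = parallelism_deviation(a, b) < np.deg2rad(5.0)
```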
  • the prior information can be customized.
  • the a priori information of the lane line may also include but is not limited to at least one of the following: the width, length, and spacing of the lane line.
  • the prior information meets the pre-defined preset value range, it can be considered to meet the characteristics of the lane line.
  • That is, the prior check can be customized as: the width of the lane line meets a preset width range (for example, the width of the painted line can be 20 cm); the length of the lane line meets a preset length range (for example, for a dashed lane line, the length of each dash can be 3 m); the spacing between lane lines meets a preset spacing range (for example, for different roads, the lane width, i.e., the spacing between lane lines, can be 3-3.5 m).
  • The prior information requirement of the lane line can be set to satisfy at least one of these conditions or all of them. Such prior information is not affected by whether the road section is straight or curved and is applicable in different application scenarios. Therefore, in a possible design, the prior information described in the foregoing two designs can be used together in the same scenario, which will not be repeated.
  • When the prior information of a candidate is met, the processed line in the image data can be regarded as an observed lane line, and the lane line observation result is obtained.
  • The current frame of image data may include lane line areas that have already been processed, for example by visual recognition. After verification against the prior information, the qualifying areas are determined as observed lane line areas, yielding the lane line observation result of the current frame of image data.
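As a hedged illustration of the threshold checks just described, the sketch below encodes the example values from the text (line width about 20 cm, dash length about 3 m, lane spacing 3-3.5 m) as preset ranges; the exact ranges and the helper name are assumptions, not part of the patent:

```python
# Illustrative prior thresholds in metres; the text only gives example values.
WIDTH_RANGE   = (0.10, 0.30)   # painted line width, ~0.2 m
LENGTH_RANGE  = (2.0, 4.0)     # dash length, ~3 m
SPACING_RANGE = (2.8, 3.8)     # lane spacing, 3-3.5 m

def passes_priors(width_m: float, length_m: float, spacing_m: float) -> bool:
    """Return True if a candidate marking satisfies all lane line priors."""
    return (WIDTH_RANGE[0] <= width_m <= WIDTH_RANGE[1]
            and LENGTH_RANGE[0] <= length_m <= LENGTH_RANGE[1]
            and SPACING_RANGE[0] <= spacing_m <= SPACING_RANGE[1])
```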
  • S106 Perform historical observation result association on the lane line observation result to obtain a lane line detection result.
  • the obtained lane line detection result includes at least one section of lane line.
  • Through the foregoing design, the embodiment of the present invention starts from image data containing lane lines and processes it in combination with prior information to obtain the lane line observation result of the current frame of image data; further, through association with historical observation results, a complete lane line detection result associated with historical frame image data can be obtained. Because the image data can be collected in real time, lane line observation can be realized with this solution as long as the image acquisition device can capture an initial image containing the lane lines, and the obtained observation result matches the real-time environment, providing high flexibility. In addition, compared with schemes that build offline high-precision maps, the requirements on the image acquisition equipment are low and the hardware cost is relatively low. Therefore, the technical solutions provided by the embodiments of the present invention can achieve lane line detection without using a high-precision map, reducing the safety risk of autonomous vehicles.
  • step S102 can be implemented by receiving or acquiring image data including lane lines collected by the image collection device.
  • the image acquisition device may be set to acquire images under the control of the execution subject of the method (hereinafter, referred to as the lane line detection device for convenience of description), or may also be set to automatically acquire images.
  • the image acquisition device can also work in a custom or preset mode.
  • the embodiment of the present invention is not particularly limited.
  • For example, the image acquisition device can continuously acquire and output images in real time, or it can work intermittently and output images (when applied to real-time route planning or real-time driving, the intermittent mode needs a small time interval so as not to affect the driving of the vehicle).
  • In a possible design, the aforementioned image data involved in the embodiment of the present invention is obtained after visual recognition processing. That is, after the aforementioned image acquisition device collects the initial image data, it sends the initial image data to the processing device; the processing device performs visual recognition processing on the initial image data and then sends the processed image data to the lane line detection device. Specifically, the processing device may input the initial image data into a CNN (convolutional neural network) for computation, identifying the areas of the image that belong to lane lines.
  • the embodiment of the present invention has no particular limitation on the visual recognition model executed by the processing device, and it is mainly used to extract lane line features from initial image data containing lane lines to obtain image data containing lane line features.
  • In this design, the processing device can be designed separately, or it can be integrated with the lane line detection device.
  • For example, the lane line detection device can include two processors, one for performing visual recognition processing on the initial image data and another for executing the lane line detection method shown in FIG. 1; or, as another example, only one processor is provided in the lane line detection device, and that processor executes the foregoing scheme.
  • In addition, the processing device can also be integrated into any other equipment or device, such as the aforementioned image acquisition device, which will not be repeated.
  • FIG. 2 shows the specific implementation of step S104 in the scene of realizing the lane line observation result of the current frame of image data. At this time, the following steps are included:
  • S1042 Perform skeleton extraction on the image data according to the prior information to obtain the skeleton data of the lane line.
  • Since the aforementioned image data is collected by an image acquisition device, it can be concretely represented as pixels, and this step extracts the skeleton data of the lane lines from these pixels, that is, the pixel data that may constitute lane lines.
  • As a specific skeleton extraction method, all pixel data of the image data can be filtered according to the prior information, extracting all pixels satisfying the aforementioned prior information as the skeleton data. This step can effectively reduce the amount of data and improve the efficiency of subsequent processing steps.
  • Considering that the arrangement of the image acquisition device relative to the vehicle may vary, in some possible implementations the image data is already a top-view image.
  • In another possible design, the image data is an image from another perspective. In this case, before extracting the skeleton data, it is also necessary to perform perspective conversion on the image data in advance to obtain top-view data of the image data under a bird's-eye view; then, according to the prior information, skeleton extraction is performed on the top-view data to obtain the skeleton data.
  • the method of skeleton extraction is the same as before and will not be repeated.
  • the conversion of the viewing angle to the top view angle can be achieved by projection.
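For the projection to a top view, a common concrete realization (one possibility, not mandated by the patent) is a planar homography, for example with OpenCV; the corner points would come from the camera's calibration and are placeholders here:

```python
import cv2
import numpy as np

def to_bird_view(image, src_pts, dst_pts, out_size):
    """Warp a forward-facing camera image to a top (bird's-eye) view.
    src_pts: four pixel corners of a road trapezoid in the camera image.
    dst_pts: where those corners should land in the top view (a rectangle).
    In practice both come from the camera's extrinsic calibration."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, out_size)
```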
  • S1044 Filter the skeleton data to obtain characteristic data of the lane line.
  • Since the skeleton data obtained in the foregoing step does not directly yield the lane lines, it needs to be further filtered to obtain the characteristic data of the lane lines that meets the prior information; the characteristic data can then be fitted to obtain the lane line observation result of the current frame of image data.
  • this step can be implemented by at least one of the following implementations: feature extraction and cluster analysis.
  • feature extraction is used to extract feature data that meets the prior information from the skeleton data; its implementation may include, but is not limited to: at least one of feature line extraction and fitting analysis extraction.
  • the feature line extraction method may include, but is not limited to: Hough transform.
  • the cluster analysis is used to cluster the skeleton data to obtain characteristic data that meets the prior information.
  • It can be seen that the foregoing two feature extraction approaches can be adopted simultaneously in a concrete implementation; in other words, a dual feature extraction method can be used to extract the lane line feature data.
  • When the dual feature extraction method is adopted, there is no special limitation on the execution order of the two approaches: they can be executed simultaneously and the intersection taken, or they can be executed sequentially, so that the second feature extraction is performed on the candidate feature data obtained by the first feature extraction, yielding the final lane line feature data.
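A minimal sketch of this two-stage extraction is given below, assuming the skeleton is an 8-bit binary mask; the Hough parameters and the simple angle-based grouping (a stand-in for the cluster analysis described above) are illustrative assumptions:

```python
import cv2
import numpy as np

def extract_candidates(skeleton_mask):
    """First stage: probabilistic Hough transform over the skeleton pixels.
    Returns segments as an (N, 4) array of (x1, y1, x2, y2)."""
    segs = cv2.HoughLinesP(skeleton_mask, rho=1, theta=np.pi / 180,
                           threshold=30, minLineLength=20, maxLineGap=10)
    return segs.reshape(-1, 4) if segs is not None else np.empty((0, 4), int)

def cluster_by_direction(segments, angle_tol_deg=5.0):
    """Second stage: greedily group segments whose direction angles agree."""
    if len(segments) == 0:
        return []
    angles = np.degrees(np.arctan2(segments[:, 3] - segments[:, 1],
                                   segments[:, 2] - segments[:, 0]))
    order = np.argsort(angles)
    clusters, current = [], [order[0]]
    for i in order[1:]:
        if abs(angles[i] - angles[current[-1]]) <= angle_tol_deg:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return [segments[idx] for idx in clusters]
```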
  • S1046 Perform line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
  • The feature data obtained through the foregoing steps is the pixel point data that can constitute the lane lines; therefore, it is only necessary to perform line fitting on these pixel points to obtain the lane line curves.
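As a small illustration of this step (one straightforward choice, not the patent's prescribed fit), a low-order polynomial can be fitted per lane line; fitting x as a function of y suits near-vertical lanes in a top view:

```python
import numpy as np

def fit_lane_line(points, degree=2):
    """Fit one lane line's pixels (an (N, 2) array of (x, y)) with a
    polynomial x = f(y). Returns coefficients, highest order first."""
    points = np.asarray(points, dtype=float)
    return np.polyfit(points[:, 1], points[:, 0], degree)

# Evaluate the fitted curve at arbitrary rows for drawing or later steps:
# xs = np.polyval(coeffs, ys)
```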
  • FIG. 3 shows a more specific processing flow for acquiring the lane line observation result of the current frame of image data.
  • S104 may also specifically include the following steps:
  • S10422 Perform viewing angle conversion on the image data to obtain overhead view data of the image data.
  • S10424 Perform skeleton extraction on the top view data according to the prior information to obtain the skeleton data.
  • S10442 Perform Hough transform on the skeleton data to extract candidate feature data in the skeleton data.
  • S10444 Perform cluster analysis on the candidate feature data according to the prior information to obtain the feature data of the lane line.
  • S1046 Perform line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
  • Through the approach shown in FIG. 2 or FIG. 3, the lane line observation result of the current frame of image data can be obtained from that frame.
  • In addition, in concrete application scenarios the image data can be continuous or intermittent multi-frame data; the aforementioned processing is then performed for each frame of image data (as the current frame), obtaining the lane line observation result corresponding to each frame. After that, the lane line observation results of the individual frames are associated in time sequence to obtain the overall lane line observation result.
  • S105 Perform coordinate conversion on the lane line observation result of the current frame of image data, so that the converted lane line observation result is in the world coordinate system.
  • the coordinate conversion is to convert the observation result of the lane line from the local vehicle body coordinate system to the world coordinate system. In short, this step is equivalent to the conversion of the Local to Global coordinate system.
  • S105 includes the following steps:
  • The vehicle's pose information includes position information and attitude information, where the position information can be expressed as coordinates, and the attitude information can be expressed as, but is not limited to, pitch, yaw, and roll angles.
  • the pose information involved in the embodiment of the present invention can be obtained by a pose sensor.
  • the pose sensor involved in the embodiment of the present invention may include but is not limited to at least one of the following: an inertial measurement unit IMU and a visual odometer.
  • the foregoing acquisition step can be implemented by directly receiving or actively acquiring the data collected by the IMU.
  • the specific working mode of the IMU is not described in detail in the embodiment of the present invention.
  • the embodiment of the present invention does not specifically limit the detection accuracy of the IMU and other attributes. Therefore, detection can be achieved without using a special high-cost IMU, which can effectively reduce hardware costs.
  • the realization of the visual odometer may include but is not limited to: Visual-Inertial Integration System (VINS).
  • the pose information acquired in the foregoing steps is real-time vehicle pose information, which is conducive to real-time lane line observation results.
  • the pose information may also be non-real-time.
  • the method of performing coordinate conversion on the lane line observation result may be:
  • Through the foregoing steps, the bird's-eye view after perspective conversion of the lane line observation result has been obtained, so the bird's-eye view can first be converted to the camera coordinate system. Because the mounting positions of the camera and the IMU on the vehicle can be considered fixed, there is a definite positional relationship between the two, so the lane line observation result can then be converted to the IMU coordinate system. Finally, by combining the pose information of the vehicle, the lane line observation result can be converted to the world coordinate system, enabling subsequent unified processing.
  • In this way, the coordinate conversion from the local vehicle body coordinate system to the world coordinate system is realized, so that when the subsequent historical observation result association is performed, the lane line observation results of each single frame are in the same coordinate system, facilitating subsequent processing.
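The chain of conversions just described (bird's-eye view to camera frame to IMU frame to world frame) can be sketched with homogeneous transforms; the matrix names below are assumptions for illustration, with the two mounting transforms fixed by calibration and the last one derived from the vehicle pose:

```python
import numpy as np

def to_world(points_bv, T_cam_from_bv, T_imu_from_cam, T_world_from_imu):
    """Convert (N, 3) bird's-eye-view points to the world frame by chaining
    4x4 homogeneous transforms: T_world_from_imu comes from the vehicle
    pose (position + attitude); the others from fixed mounting calibration."""
    pts = np.hstack([points_bv, np.ones((len(points_bv), 1))])  # homogeneous
    T = T_world_from_imu @ T_imu_from_cam @ T_cam_from_bv
    return (T @ pts.T).T[:, :3]
```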
  • For different scenarios, the embodiment of the present invention provides two processing methods for historical observation result association: data association and no-data association.
  • the historical frame image data observation involved in the embodiment of the present invention refers to the observation result of the lane line that is detected before the current frame image data and is continuous and uninterrupted with the current frame image data.
  • For example, if the current frame of image data is the 5th frame, the 4th-frame observation and the 3rd-frame observation are continuous with the 5th-frame lane line observation result and precede it in time sequence; therefore, the 4th-frame and 3rd-frame observations can be regarded as historical frame image data observations of the 5th-frame lane line observation result.
  • If the current frame of image data is the 5th frame but only the 3rd-frame observation is available, there is a break in frame timing between the two, so the 3rd-frame observation cannot be used as a historical frame image data observation of the 5th-frame lane line observation result.
  • If the current frame of image data is the 5th frame and the 5th-frame lane line observation result is the starting frame, there is no historical frame image data observation before the lane line observation result of the current frame.
  • If there exist single-frame or multi-frame lane line observation results that meet the aforementioned conditions, data association is performed on the lane line observation result of the current frame of image data; conversely, if no such observation results exist, no-data association is used for the lane line observation result of the current frame of image data.
  • In one possible scenario, the current frame of image data is the starting frame, in which case there is no historical frame image data observation result; in another possible scenario, the observations of the previous frame or frames are discontinuous with the current frame of image data; in yet another possible scenario, the observation results of historical frame image data have been lost and cannot be obtained.
  • the first type is data association, which is used to perform data association on the lane line observation result in combination with historical frame image data observation.
  • using the data-related method to associate historical observation results can include the following steps:
  • The embodiment of the present invention provides a specific implementation for obtaining the target lane line observation result: in at least two lane line dimensions, cost values between the historical frame image data observation and the lane line observation result are obtained, where each cost value characterizes the difference of the observation data in the corresponding lane line dimension; then, a cost matrix is constructed from the at least two cost values, the optimal solution of the cost matrix is obtained, and the lane line observation result corresponding to the optimal solution is taken as the target lane line observation result.
  • the aforementioned lane line dimension is used to describe the line posture, which may specifically include at least two of the following: position, direction, and curvature.
  • The specific calculation of a cost value is: in any lane line dimension, obtain the difference between the lane line observation result of the current frame of image data and the lane line observed in the historical frame image data. For example, the distance difference, angle difference, and curvature difference between the two can be obtained as the cost values between the current frame's lane line observation result and the historical frame image data observation.
  • the observation results of the historical frame image data and the lane line observation results of the current frame image data both contain at least two lane lines.
  • In this case, any lane line in the historical frame image data observation and any lane line in the lane line observation result of the current frame of image data can be combined in pairs to obtain the cost value between that pair of lane lines.
  • the cost value of a group of lane lines in at least two lane line dimensions can be used as a cost vector, combined with the cost vectors of other groups of lane lines, and finally a cost matrix is constructed.
  • the optimal solution finally obtained can represent the lane line correspondence between the historical frame image data observation and the lane line observation result of the current frame image data.
  • It can be seen that this processing method has high accuracy, but, due to the large amount of data to be processed, it adversely affects data processing efficiency to a certain extent.
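The patent only states that the cost matrix is solved for its optimal solution under the minimum-cost principle; one standard solver consistent with that description is minimum-cost bipartite assignment (the Hungarian algorithm), sketched here with illustrative per-dimension weights:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(history_feats, current_feats, weights=(1.0, 1.0, 1.0)):
    """Build a cost matrix over (position, direction, curvature) differences
    and solve it by minimum-cost assignment. history_feats: (M, 3),
    current_feats: (N, 3); one row per lane line, one column per dimension."""
    diff = np.abs(history_feats[:, None, :] - current_feats[None, :, :])
    cost = (diff * np.asarray(weights)).sum(axis=-1)   # (M, N) cost matrix
    rows, cols = linear_sum_assignment(cost)           # minimum total cost
    return list(zip(rows.tolist(), cols.tolist()))     # (history_i, current_j)
```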
  • Alternatively, any lane line or the lane center line in the historical frame image data observation can participate in the calculation of the aforementioned cost values.
  • the historical frame image data observation includes at least one of the following: lane center line and arbitrary lane line.
  • The lane center line is a virtual line located at the center of all the lane lines and parallel to them.
  • the dashed line shown in Fig. 6 is the lane center line.
  • When this is adopted, the aforementioned second scheme can likewise be used to obtain a target lane line corresponding to each lane line in the historical frame image data observation, which will not be repeated.
  • Through the foregoing steps, the cost matrix can be obtained; it then only remains to find the optimal solution of the cost matrix.
  • When seeking the optimal solution, considering that each element in the cost matrix represents the difference between the historical frame image data observation and the lane line observation result of the current frame of image data, the cost matrix can be solved based on the minimum-cost principle in order to obtain the target lane line with the smallest deviation, yielding the target lane line observation result.
  • S1064: Determine index information for the lane line observation result according to the correspondence of the target lane line observation result in the historical frame image data observation, to obtain the lane line detection result.
  • the index information inherits the index serial number observed by the historical frame image data.
  • the so-called inheriting the index number of historical frame image data observation refers to re-determining the index number of the lane line observation result of the current frame image data according to the sorting method and order of historical frame image data observation.
  • For example, suppose the index numbers of the historical frame image data are 0-100 and the index numbers of the current frame of image data are 0-10, and, according to the correspondence of the target lane line observation result in the historical frame image data observation, it is determined that index number 0 of the current frame corresponds to index number 95 of the historical frames. According to this relationship, the index numbers of the current frame of image data can be re-determined as 95-105, so that the lane line observation result of the current frame inherits the index numbers of the historical frame image data observation, and the two are merged by index number to obtain the lane line detection result.
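A minimal sketch of this index inheritance, mirroring the 0-10 to 95-105 example above (the function and argument names are assumptions):

```python
def inherit_indices(matches, history_indices, num_current):
    """Re-number the current frame's lane lines: matched lines inherit the
    historical index; unmatched ones get fresh indices after the maximum
    historical index. Assumes history_indices is non-empty."""
    out = [None] * num_current
    for hist_i, cur_j in matches:      # matches: (history_i, current_j) pairs
        out[cur_j] = history_indices[hist_i]
    next_idx = max(history_indices) + 1
    for j in range(num_current):
        if out[j] is None:
            out[j] = next_idx
            next_idx += 1
    return out
```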
  • The second type, no-data association, is used to assign index data to the lane line observation result when there is no historical frame image data observation.
  • Specifically, based on the index offset between lane line observation results, index information can be reallocated to the lane line observation result of the current frame of image data to obtain the lane line detection result.
  • The index offset is used to indicate the deviation of index values between the lane lines within each single frame of observation data.
  • Through the foregoing solution, lane line detection can be achieved. This detection method requires neither expensive high-precision hardware instruments nor high-precision maps built in advance, and lane line observation can be carried out in real time on road sections that satisfy the foregoing prior information, with high flexibility and low hardware cost.
  • the method also includes the following steps:
  • S108 Perform fitting optimization on the lane line detection result to obtain an optimized lane line detection result.
  • The processing shown in FIG. 7 further optimizes the detected, rougher lane lines, so that the optimized lane lines can be output and displayed as relatively smooth, clean curves.
  • the fitting optimization process can be realized through a preset fitting optimization model.
  • The input of the fitting optimization model is the lane line and its index information, and the output of the fitting optimization model is the lane center line and the lane width. It can be seen that, before performing step S108, the fitting optimization model needs to be preset or trained in advance.
  • the fitting optimization model may be a least squares model.
  • Specifically, the fitting optimization model can be designed so that the observation of each lane line of the current frame of image data best fits the observations of the lane line in the historical frame image data within the aforementioned determined index range.
  • the problem of preventing overfitting needs to be further considered.
  • Specifically, the input of the fitting optimization model may at least include the points related to the lane line observation results, and the output of the fitting optimization model may at least include the lane width w and the polynomial function of the lane center line curve.
  • For example, the widths of the three lanes are represented by w1, w2, and w3 respectively; w in the above formula can then be represented accordingly.
  • τ represents the semantic offset vector of the lanes, where τ can be expressed as the transpose of the vector of offsets of all lane lines relative to the lane center line, specifically:
  • τ1 represents the offset of the first lane line from left to right relative to the lane center line;
  • τ2 represents the offset of the second lane line from left to right relative to the lane center line;
  • τ3 represents the offset of the third lane line from left to right relative to the lane center line;
  • τ4 represents the offset of the fourth lane line from left to right relative to the lane center line.
  • the optimized lane line detection result can be obtained.
  • the optimized lane line detection results can be directly applied to path planning or driving of unmanned vehicles.
  • For example, the vehicle may be equipped with a vehicle control system (for example, a supercomputing platform of an autonomous vehicle); the lane line detection result may be sent to the vehicle control system, and the vehicle control system generates vehicle control instructions based on the lane line detection result to control the movement of the vehicle, for example controlling the vehicle to stay in the middle of the lane or to change lanes.
  • Alternatively, lane line detection can also be completed within the vehicle control system itself, in which case the result does not have to be sent to it; there is no limitation here.
  • the embodiment of the present invention further provides an embodiment of a device that implements each step and method in the foregoing method embodiment.
  • the embodiment of the present invention provides a lane line detection device. Please refer to FIG. 8.
  • the lane line detection device 800 includes:
  • the obtaining module 81 is used to obtain image data including lane lines;
  • the processing module 82 is configured to process the image data according to the priori information of the lane line to obtain the lane line observation result of the current frame image data;
  • the association module 83 is configured to associate the lane line observation result with historical observation results to obtain the lane line detection result.
  • the prior information includes: parallelism between lane lines.
  • In a possible design, the prior information further includes at least one of the following: the width, length, and spacing of the lane lines meeting their respective preset value ranges.
  • the processing module 82 includes:
  • An extraction sub-module (not shown in FIG. 8) is used to extract the skeleton of the image data according to the prior information to obtain the skeleton data of the lane line;
  • the screening sub-module (not shown in FIG. 8) is used to screen the skeleton data to obtain characteristic data of the lane line;
  • the fitting sub-module (not shown in FIG. 8) is used to perform line fitting on the characteristic data to obtain the lane line observation result of the current frame of image data.
  • the extraction sub-module (not shown in Figure 8) can be specifically used for:
  • perform perspective conversion on the image data to obtain top-view data, and perform skeleton extraction on the top-view data according to the prior information to obtain the skeleton data.
  • The screening sub-module (not shown in FIG. 8) can be specifically used to filter the skeleton data through at least one of the following implementations: feature extraction and cluster analysis.
  • the extraction sub-module is specifically configured to extract candidate feature data in the skeleton data through at least one of the following implementation methods: feature line extraction and fitting analysis.
  • the lane line detection device 800 may also include: a coordinate conversion module (not shown in FIG. 8), which is specifically used for:
  • the coordinate conversion is performed on the lane line observation result, so that the converted lane line observation result is in the world coordinate system.
  • The coordinate conversion module (not shown in FIG. 8) is specifically configured to acquire the pose information of the vehicle, and perform coordinate conversion on the lane line observation result in combination with the pose information.
  • the association module 83 specifically includes:
  • a no-data association sub-module (not shown in FIG. 8), used to perform no-data association on the lane line observation result when there is no historical frame image data observation.
  • A data association sub-module (not shown in FIG. 8) is used to acquire, from the historical frame image data observation, the target lane line observation result corresponding to the lane line observation result, and to determine index information for the lane line observation result according to its correspondence in the historical frame image data observation, obtaining the lane line detection result; wherein the index information inherits the index numbers of the historical frame image data observation.
  • The data association sub-module (not shown in FIG. 8) is also specifically used to: in at least two lane line dimensions, respectively acquire the cost values between the historical frame image data observation and the lane line observation result, where each cost value represents the difference of the observation data in the corresponding lane line dimension.
  • the lane line dimensions include at least two of the following: position, direction, and curvature.
  • The no-data association sub-module is further configured to reallocate index information for the lane line observation result to obtain the lane line detection result.
  • the historical frame image data observation involved in the embodiment of the present invention includes at least one of the following: a lane center line and any lane line.
  • the lane line detection device 800 may further include:
  • the fitting optimization module (not shown in FIG. 8) is used to perform fitting optimization on the lane line detection result to obtain the optimized lane line detection result.
  • The fitting optimization module (not shown in FIG. 8) is specifically used to generate the optimized lane line detection result based on the preset fitting optimization model.
  • the lane line detection result involved in the embodiment of the present invention includes at least one section of lane line.
  • the image data is collected by an image collecting device.
  • the lane line detection device 800 of the embodiment shown in FIG. 8 can be used to implement the technical solutions of the foregoing method embodiments. For its implementation principles and technical effects, please refer to the relevant descriptions in the method embodiments.
  • The lane line detection device 800 may be a terminal or a server.
  • each module of the lane line detection device 800 shown in FIG. 8 is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • these modules can all be implemented in the form of software called by processing elements; they can also be implemented in the form of hardware; part of the modules can be implemented in the form of software called by the processing elements, and some of the modules can be implemented in the form of hardware.
  • the processing module 82 may be a separate processing element, or it may be integrated in the lane line detection device 800, for example, implemented in a chip of the terminal, and it may also be stored in the memory of the lane line detection device 800 in the form of a program.
  • a certain processing element of the lane line detection device 800 calls and executes the functions of the above modules.
  • the implementation of other modules is similar.
  • all or part of these modules can be integrated together or implemented independently.
  • the processing element described here may be an integrated circuit with signal processing capability.
  • each step of the above method or each of the above modules can be completed by hardware integrated logic circuits in the processor element or instructions in the form of software.
  • an embodiment of the present invention also provides a lane line detection device. Please refer to FIG. 9.
  • the lane line detection device 800 includes:
  • the instruction is stored in the memory 810 and is configured to be executed by the processor 820 to implement the method described in any implementation manner of the first embodiment.
  • the number of processors 820 in the lane line detection device 800 may be one or more, and the processors 820 may also be referred to as processing units, which may implement certain control functions.
  • the processor 820 may be a general-purpose processor or a special-purpose processor.
  • the processor 820 may also store instructions, and the instructions may be executed by the processor so that the lane line detection device 800 executes the method described in the above method embodiment.
  • the lane line detection device 800 may include a circuit, and the circuit may implement the sending or receiving or communication function in the foregoing method embodiment.
  • the number of the memory 810 in the lane line detection device 800 may be one or more, and the memory 810 has instructions or intermediate data stored thereon, and the instructions may be executed on the processor 820 so that the The lane line detection device 800 executes the method described in the foregoing method embodiment.
  • other related data may also be stored in the memory 810.
  • instructions and/or data may also be stored in the processor 820.
  • the processor 820 and the memory 810 can be provided separately or integrated together.
  • the lane line detection device 800 may further include:
  • the transceiver 830 is configured to receive the image data and the vehicle pose data.
  • the transceiver 830 may be referred to as a transceiver unit, a transceiver, a transceiver circuit, or a transceiver, etc., for implementing the transceiver function of the lane line detection device 800.
  • the transceiver 830 can further perform other corresponding communication functions.
  • the processor 820 may be used to complete corresponding determination or control operations, and optionally, may also store corresponding instructions in the memory 810. For the specific processing manner of each component, reference may be made to the related description of the foregoing embodiment.
  • The processor and transceiver described in this application can be implemented in an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), electronic equipment, etc.
  • The processor and transceiver can also be manufactured using various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), etc.
  • the lane line detection device 800 may be an independent device or may be a part of a larger device.
  • the lane line detection system 1000 includes:
  • the lane line detection device 800 shown in FIG. 8 or FIG. 9, at least including a memory 810 and a processor 820; the memory 810 is used to store instructions, and the processor 820 is used to execute the instructions and implement the method described in any implementation of Embodiment 1;
  • the image acquisition device 1010 is used to collect initial image data and send it to the processing device;
  • the processing device 1020 is configured to perform visual recognition processing on the initial image data and send the processed image data to the lane line detection device 800;
  • the pose sensor 1030 is used to collect pose data of the vehicle and send the pose data to the lane line detection device 800.
  • the pose sensor 1030 may include but is not limited to at least one of the following: an inertial measurement unit IMU or a visual odometer, where the visual odometer may include but is not limited to visual inertial fusion VINS.
  • In addition, the aforementioned processing device 1020 can be designed independently, or can be integrated with any one of the lane line detection device 800, the image acquisition device 1010, and the pose sensor 1030, which is not particularly limited in the embodiment of the present invention.
  • the embodiment of the present invention further provides another possible design: the system includes:
  • the device 800 shown in FIG. 8 or FIG. 9, including at least a memory 810 and a processor 820; the memory 810 is used to store instructions, and the processor 820 is used to execute the instructions and implement the method described in any implementation of Embodiment 1;
  • the image acquisition device 1010 is used to acquire initial image data and send the acquired initial image data to the device 800;
  • the pose sensor 1030 is used to collect pose data of the vehicle and send the pose data to the device 800.
  • an embodiment of the present invention provides a vehicle. Please refer to FIG. 11.
  • the vehicle 1100 includes the aforementioned lane line detection device 800.
  • In another embodiment, referring to FIG. 12, the vehicle 1200 includes: a body 1020; a power system 1030 connected to the body to drive the vehicle to move; a vehicle control system 1010 used to control the vehicle; and the aforementioned lane line detection system 1000.
  • FIG. 12 only exemplarily shows a relationship between the lane line detection system 1000 and the vehicle control system 1010. It should be noted that the lane line detection system 1000 may also be fully or partially integrated in the vehicle control system 1010. For example, one, more or all of the components of the memory, the processor, the image acquisition device, the processing device, or the sensor can be integrated or belong to the vehicle control system 1010. The separate illustration of the two in Figure 12 does not limit the two to be two separate systems.
  • an embodiment of the present invention provides a readable storage medium with instructions stored thereon, and the instructions are executed by a processor to implement the method as described in the first embodiment.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • The foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as ROM, RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A lane line detection method, device, system, vehicle, and storage medium. The method includes: acquiring image data including lane lines (S102); then processing the image data according to prior information of the lane lines to obtain a lane line observation result of the current frame of image data (S104); and then associating the lane line observation result with historical observation results to obtain a lane line detection result (S106). Lane line detection can thereby be achieved without relying on high-precision maps, reducing the safety risk of autonomous vehicles.

Description

Lane line detection method, device, system, vehicle, and storage medium. Technical Field
The embodiments of the present invention belong to the field of intelligent transportation technology, and in particular relate to a lane line detection method, device, system, vehicle, and storage medium.
Background
With the continuous development of intelligent transportation technology, image-based single-frame lane line detection technology has also matured.
At present, the more mature technology is based on high-precision maps constructed offline in advance. First, a high-precision inertial measurement unit (IMU) and lidar must be used to collect map information of the relevant road sections; next, this map information must be annotated offline to form a high-precision map, so that while driving the vehicle can load the offline high-precision map and localize itself through point cloud registration, thereby obtaining the lane line information of the area where the vehicle is located.
However, existing lane line detection schemes are limited by offline-constructed high-precision maps. If the high-precision map does not match the real-time environment, for example because of missing regions or lagging information, autonomous vehicles will face a greater safety risk.
Summary of the Invention
The embodiments of the present invention provide a lane line detection method, device, system, vehicle, and storage medium, used to realize lane line detection without relying on high-precision maps, so as to reduce the safety risk of autonomous vehicles.
In a first aspect, an embodiment of the present invention provides a lane line detection method, including:
acquiring image data including lane lines;
processing the image data according to prior information of the lane lines to obtain a lane line observation result of the current frame of image data;
associating the lane line observation result with historical observation results to obtain a lane line detection result.
In a second aspect, an embodiment of the present invention provides a lane line detection device, including:
an acquisition module for acquiring image data including lane lines;
a processing module for processing the image data according to prior information of the lane lines to obtain a lane line observation result of the current frame of image data;
an association module for associating the lane line observation result with historical observation results to obtain a lane line detection result.
In a third aspect, an embodiment of the present invention provides a lane line detection device, including:
a memory;
a processor; and
instructions;
wherein the instructions are stored in the memory and are configured to be executed by the processor to implement the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a lane line detection system, including:
the lane line detection device according to the third aspect, including a memory and a processor, the memory being used to store instructions and the processor being used to execute the instructions and implement the method according to the first aspect;
an image acquisition device for acquiring initial image data and sending it to a processing device;
the processing device, for performing visual recognition processing on the initial image data and sending the processed image data to the lane line detection device;
a pose sensor for collecting vehicle pose data and sending the pose data to the lane line detection device.
In a fifth aspect, an embodiment of the present invention provides a vehicle, including the lane line detection device according to the third aspect.
In a sixth aspect, an embodiment of the present invention provides a vehicle, including: a body;
a power system connected to the body, for driving the vehicle to move;
a vehicle control system, for controlling the vehicle; and
the lane line detection system according to the fourth aspect.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium having instructions stored thereon, the instructions being executed by a processor to implement the method according to the first aspect.
The lane line detection method, device, system, vehicle, and storage medium provided by the embodiments of the present invention start from image data containing lane lines and process it in combination with prior information to obtain the lane line observation result of the current frame of image data; further, through association with historical observation results, a complete lane line detection result associated with historical frame image data can be obtained. Because the image data can be collected in real time, lane line detection can be achieved with this scheme as long as the image acquisition device can capture an initial image containing the lane lines, and the obtained lane line detection result matches the real-time environment, providing high flexibility. In addition, compared with schemes that build offline high-precision maps, the requirements on the image acquisition equipment are low and the hardware cost is relatively cheap. Therefore, the technical solutions provided by the embodiments of the present invention can realize lane line detection without relying on high-precision maps, reducing the safety risk of autonomous vehicles.
Brief Description of the Drawings
The drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
FIG. 1 is a schematic flowchart of a lane line detection method provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a lane line scene provided by an embodiment of the present invention;
FIG. 7 is a schematic flowchart of another lane line detection method provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a lane line detection device provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of the physical structure of a lane line detection device provided by an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a lane line detection system provided by an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a vehicle provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of another vehicle provided by an embodiment of the present invention.
The above drawings show specific embodiments of the present disclosure, which are described in more detail hereinafter. These drawings and the textual description are not intended to limit the scope of the concept of the present disclosure in any way, but rather to explain the concept of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
First, the specific application scenarios of the embodiments of the present invention are described. The technical solutions provided by the embodiments of the present invention apply specifically to lane line observation scenarios. Further, they can be applied to driving scenarios of unmanned vehicles (also called autonomous vehicles), or further to vehicle path planning scenarios.
As mentioned above, existing lane line observation is generally implemented on the basis of pre-built high-precision maps. Constructing a high-precision map places high demands on the accuracy of the acquisition hardware: a high-precision IMU and lidar are generally needed to collect the data, with manual offline annotation required to form the high-precision map. Lane line observation based on a high-precision map must in turn cooperate with the localization of the vehicle during driving: through point cloud registration against the high-precision map, the lane line map information of the area where the vehicle is located is matched to realize lane line observation.
Since existing lane line observation depends on high-precision maps, the high-precision map must be constructed before lane line observation can be performed; however, a map built in advance may still fail to match the real-time environment. On the one hand, the coverage of the high-precision map may not be comprehensive enough, that is, some regions are missing, so lane lines cannot be observed by the aforementioned method in areas where no map was built in advance. On the other hand, the high-precision map may suffer from synchronization lag: for example, if a road section is under construction or the road structure has been modified and the map is not updated in time, the information represented by the map becomes inconsistent with the actual road environment; the original high-precision map cannot adapt to the current actual environment, resulting in low accuracy of the lane line observation results obtained from it. In addition, constructing high-precision maps incurs large hardware costs (higher accuracy requirements lead to higher costs), labor costs (manual annotation is required), and time costs (the time to measure the data in advance, annotation time, etc.).
In summary, existing lane line observation results are limited by high-precision maps; if the map does not match the actual road conditions, lane lines cannot be observed or are observed with low accuracy. Since the lane line observation result is an important basis for the driving of unmanned vehicles, failure to accurately adapt to the environment leads to greater safety risks.
The lane line detection scheme provided by the present invention aims to solve the above technical problems of the prior art and proposes the following idea: use the image acquisition equipment installed on the vehicle to capture images, process the lane lines contained in the images to obtain the lane line observation result of the current frame of image data, and fuse the lane line observation results in time sequence to obtain a lane line map.
The technical solutions of the present invention and of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments can be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described below with reference to the drawings.
Embodiment 1
An embodiment of the present invention provides a lane line detection method. Referring to FIG. 1, the method includes the following steps:
S102: Acquire image data including lane lines.
In the embodiment of the present invention, the image data is image data containing lane lines. The lane lines involved in the embodiment of the present invention include, but are not limited to, lane boundary lines. A lane boundary line refers to a lane line painted on the ground on one or both sides of a lane; it can include a single dashed line, a single solid line, a double solid line, etc., as well as white and yellow lane boundary lines.
Specifically, the image data can be acquired by any image acquisition device, which can be installed in the vehicle. The embodiment of the present invention places no special limitation on attributes such as the acquisition accuracy of the image acquisition device; in a concrete implementation, the data can be acquired by any device capable of capturing the road surface, such as a grayscale camera or a color camera. In other words, the embodiment of the present invention requires no additional hardware cost and can be realized using only the image acquisition devices commonly installed in current vehicles; compared with the scheme of constructing high-precision maps with a high-precision IMU and lidar, this effectively reduces hardware costs.
In some special scenarios, the data collected by the image acquisition device may not contain lane lines; image data that does not contain lane lines need not be processed further.
S104: Process the image data according to prior information of the lane lines to obtain the lane line observation result of the current frame of image data.
The prior information of the lane line is used for verification, to confirm what constitutes a lane line, in other words, the characteristics that a lane line possesses.
In one possible design, the embodiment of the present invention can be implemented on the basis of the parallelism of lane lines; in this case, the prior information can include, but is not limited to, the parallelism between lane lines. Parallelism refers to the degree to which multiple lane lines are mutually parallel; in some cases, it can be measured by the direction angle between two lane lines. For example, the lane lines of a straight road section, and those of a curved road section under normal driving, can generally be considered mutually parallel. It should be understood that parallelism between lane lines does not mean that the lane lines are straight; on the aforementioned curved section, if the points or local directions of the lane lines are mutually consistent, they can likewise be considered parallel. Therefore, on the basis of this prior information, the solution provided by the embodiment of the present invention can be applied to lane line detection scenarios on most driving sections.
In another possible design, the prior information can be customized. Specifically, the prior information of the lane line can also include, but is not limited to, at least one of the following: the width, length, and spacing of the lane lines. When the prior information meets a predefined preset value range, the candidate can be considered to match the characteristics of a lane line. That is, the prior check can be customized as: the width of the lane line meets a preset width range (for example, the width of the painted line can be 20 cm); the length of the lane line meets a preset length range (for example, for a dashed lane line, the length of each dash can be 3 m); and the spacing between lane lines meets a preset spacing range (for example, for different roads, the lane width, i.e., the spacing between lane lines, can be 3-3.5 m). The prior information requirement of the lane line can be set to satisfy at least one condition or all conditions. Such prior information is unaffected by whether the section is straight or curved and is applicable in different application scenarios. Therefore, in one possible design, the prior information described in the foregoing two designs can be used together in the same scenario, which will not be repeated.
When the prior information of a candidate meets the preset conditions, the processed line in the image data can be regarded as an observed lane line, and the lane line observation result is obtained. The current frame of image data may include lane line areas obtained by earlier processing, for example visual recognition; after verification against the prior information, the qualifying areas are determined as observed lane line areas, yielding the lane line observation result of the current frame of image data.
S106: performing historical observation result association on the lane line observation result to obtain a lane line detection result.
The obtained lane line detection result includes at least one segment of lane line. The steps preceding S106 produce the lane line observation result of the current frame of image data, whereas S106 fuses the lane line observation results over time. The "historical observation result association" in this step therefore fuses the lane line observation result of the current frame obtained in the preceding steps with the observations of historical frames of image data. In a specific implementation, different means are adopted depending on the situation of the historical frame observations to obtain the lane line detection result, as detailed later.
Thus, through the foregoing design, the embodiments of the present invention start from image data containing lane lines and process it in combination with prior information to obtain the lane line observation result of the current frame of image data; then, through historical observation result association, a complete lane line detection result associated with historical frames of image data can be obtained. Since image data can be acquired in real time, lane line detection can be implemented by this solution as long as the image acquisition device can capture an initial image containing lane lines; moreover, the obtained observation result matches the real-time environment, which provides high flexibility. In addition, compared with the solution of building an offline high-precision map, the requirements on the image acquisition device are modest and the hardware cost is low. Therefore, the technical solution provided by the embodiments of the present invention can implement lane line detection without relying on a high-precision map, thereby reducing the safety risk of autonomous vehicles.
The specific implementation of the foregoing method is described below.
As mentioned above, step S102 may be implemented by receiving or acquiring image data, containing lane lines, captured by the image acquisition device. The image acquisition device may be configured to capture images under the control of the entity that executes this method (hereinafter referred to as the lane line detection device for ease of description), or may be configured to capture images automatically. The image acquisition device may also work in a user-defined or preset mode, which is not specially limited in the embodiments of the present invention; for example, it may continuously capture and output images in real time, or it may work and output images intermittently (when applied to real-time path planning or real-time driving, the latter mode requires a time interval small enough not to affect vehicle driving).
In addition, in one possible design, the aforementioned image data involved in the embodiments of the present invention is obtained after visual recognition processing. That is, after capturing the initial image data, the image acquisition device sends it to a processing device, which performs visual recognition processing on the initial image data and then sends the visually recognized image data to the lane line detection device. Specifically, the processing device may feed the initial image data into a CNN (convolutional neural network) for computation to identify the regions in the image that belong to lane lines.
It should be noted that the embodiments of the present invention impose no particular limitation on the visual recognition model executed by the processing device; it is mainly used to extract lane line features from the initial image data containing lane lines, yielding image data containing lane line features.
In this design, the processing device may be designed as a standalone unit, or may be integrated with the lane line detection device. For example, the lane line detection device may include two processors, one for performing visual recognition processing on the initial image data and the other for executing the lane line detection method shown in FIG. 1 of this design; or, as another example, the lane line detection device may be provided with only one processor that executes the whole foregoing solution. Moreover, the processing device may also be integrated into any other apparatus or component, such as the aforementioned image acquisition device, which is not repeated here.
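For illustration only, the following is a minimal sketch of such a visual recognition step, assuming a generic binary-segmentation CNN in PyTorch; the model object and its output shape are hypothetical placeholders, since the embodiments do not prescribe a specific network:

```python
# Illustrative sketch only: the embodiments do not prescribe a specific network.
# The model and its (1, 1, H, W) logits shape are assumptions for this example.
import torch
import numpy as np

def extract_lane_mask(model: torch.nn.Module, bgr_image: np.ndarray) -> np.ndarray:
    """Run a generic binary-segmentation CNN and return a 0/1 lane-pixel mask."""
    x = torch.from_numpy(bgr_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)                      # assumed shape: (1, 1, H, W)
    prob = torch.sigmoid(logits)[0, 0]         # per-pixel lane probability
    return (prob > 0.5).cpu().numpy().astype(np.uint8)
```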
Referring to FIG. 2, it shows a specific implementation of step S104 in the scenario of obtaining the lane line observation result of the current frame of image data, which includes the following steps:
S1042: performing skeleton extraction on the image data according to the prior information to obtain skeleton data of the lane lines.
Since the aforementioned image data is captured by the image acquisition device, it materializes as pixels; this step extracts the lane line skeleton data from these pixels, that is, the pixel data that may possibly constitute lane lines.
A concrete way to extract the skeleton is to screen all pixel data of the image data according to the prior information, extracting all pixels that satisfy the prior information as the skeleton data. This step effectively reduces the data volume and improves the processing efficiency of the subsequent steps.
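As a non-limiting sketch of this step, the following assumes the visual-recognition output is a binary lane mask and uses a simple blob-area filter as a crude stand-in for the prior-information screening; the thinning itself is done with skimage:

```python
# Minimal sketch, assuming a binary lane mask; the area threshold is an
# illustrative stand-in for the prior-information screening.
import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import label, regionprops

def lane_skeleton(mask: np.ndarray, min_area: int = 50) -> np.ndarray:
    """Thin lane regions to 1-pixel-wide skeletons, dropping tiny blobs."""
    lbl = label(mask > 0)
    keep = np.zeros(mask.shape, dtype=bool)
    for region in regionprops(lbl):
        if region.area >= min_area:        # crude prior: lane marks are not specks
            keep[lbl == region.label] = True
    return skeletonize(keep).astype(np.uint8)
```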
Considering that the installation of the image acquisition device relative to the vehicle may vary, in some possible implementations the image data is a top-view image.
In another possible design, the image data is an image from another viewing angle. In this case, before performing the skeleton extraction, the image data must first undergo a viewing-angle conversion to obtain the top-view (bird view) data of the image data; then, according to the prior information, skeleton extraction is performed on the top-view data to obtain the skeleton data. The skeleton extraction is as described above and is not repeated. The conversion to the top view can be implemented by projection.
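A minimal sketch of such a projection using a planar homography is shown below; the four source/destination point pairs are hypothetical values that would in practice come from the camera's calibration:

```python
# Sketch of the projection to a bird's-eye view. The point pairs below are
# hypothetical; real values come from the camera's extrinsic calibration.
import cv2
import numpy as np

SRC = np.float32([[560, 470], [720, 470], [1100, 680], [180, 680]])  # image px
DST = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])       # top view px

def to_bird_view(image: np.ndarray) -> np.ndarray:
    H = cv2.getPerspectiveTransform(SRC, DST)   # 3x3 homography
    return cv2.warpPerspective(image, H, (1280, 720))
```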
S1044: screening the skeleton data to obtain feature data of the lane lines.
Since lane lines cannot be obtained directly from the skeleton data of the preceding step, the skeleton data must be further screened to obtain lane line feature data that conforms to the prior information; the feature data is then fitted to obtain the lane line observation result of the current frame of image data.
Specifically, this step can be implemented by at least one of the following: feature extraction and cluster analysis. Feature extraction is used to extract, from the skeleton data, feature data that satisfies the prior information; its implementation may include, but is not limited to, at least one of feature line extraction and fitting analysis. Feature line extraction may include, but is not limited to, the Hough transform. Cluster analysis is used to cluster the skeleton data to obtain feature data that satisfies the prior information.
The two feature extraction approaches above can also be adopted simultaneously in a specific implementation; in other words, a double feature extraction scheme can be used to extract the lane line feature data. When double feature extraction is used, the execution order of the two approaches is not specially limited: they may be executed simultaneously and their intersection taken, or executed sequentially, so that the second extraction is performed on the candidate feature data obtained by the first extraction, finally yielding the lane line feature data.
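The following sketch illustrates one way to realize the double feature extraction, with a probabilistic Hough transform followed by a clustering pass; DBSCAN and all parameter values here are assumptions, as the embodiments do not fix a particular clusterer:

```python
# Sketch of double feature extraction: Hough lines first, then clustering.
# DBSCAN and the numeric parameters are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_lane_segments(skeleton: np.ndarray) -> np.ndarray:
    segs = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=10)
    if segs is None:
        return np.empty((0, 4))
    segs = segs.reshape(-1, 4).astype(float)          # (x1, y1, x2, y2) rows
    # Cluster by lateral position and heading so segments of one line group up.
    ang = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
    feats = np.stack([segs[:, 0], ang * 100.0], axis=1)  # crude feature scaling
    labels = DBSCAN(eps=25.0, min_samples=2).fit_predict(feats)
    return segs[labels >= 0]     # keep only segments assigned to some cluster
```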
S1046: performing line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
The feature data obtained by the preceding steps is exactly the pixel data that can constitute lane lines; therefore, the lane line curves can be obtained simply by fitting lines to these pixels.
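A minimal line-fitting sketch follows; fitting x as a low-order polynomial in y is an assumption of this example that also accommodates gently curved lanes:

```python
# Sketch: fit one lane line as a low-order polynomial x = f(y) in bird view.
import numpy as np

def fit_lane(points: np.ndarray, order: int = 2) -> np.poly1d:
    """points: (N, 2) array of (x, y) lane-feature pixels of one cluster."""
    coeffs = np.polyfit(points[:, 1], points[:, 0], order)
    return np.poly1d(coeffs)     # callable: x = f(y)
```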
In a more specific implementation scenario, FIG. 3 shows a more concrete processing flow for obtaining the lane line observation result of the current frame of image data. In this case, S104 may be embodied as the following steps:
S10422: performing viewing-angle conversion on the image data to obtain top-view data of the image data.
S10424: performing skeleton extraction on the top-view data according to the prior information to obtain the skeleton data.
S10442: performing a Hough transform on the skeleton data to extract candidate feature data from the skeleton data.
S10444: performing cluster analysis on the candidate feature data according to the prior information to obtain the feature data of the lane lines.
S1046: performing line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
By the approach shown in FIG. 2 or FIG. 3, the lane line observation result of the current frame can be obtained from the current frame of image data. Moreover, in a specific application scenario the image data may consist of multiple continuous or intermittent frames; performing the foregoing processing on each frame (as the current frame of image data) yields the lane line observation result corresponding to each frame. The lane line observation results of the individual frames are then associated in time order to obtain the lane line detection result.
In addition, the lane line observation result of the current frame obtained by the preceding steps is still in the local vehicle body coordinate system, while the vehicle may be moving. Therefore, to facilitate the temporal association and fusion of the per-frame lane line observation results, the step shown in FIG. 4 must be executed before S106:
S105: performing coordinate transformation on the lane line observation result of the current frame of image data, so that the transformed lane line observation result is in the world coordinate system.
Specifically, this coordinate transformation converts the lane line observation result from the local vehicle body coordinate system to the world coordinate system; in short, this step amounts to a local-to-global coordinate transformation.
In this coordinate transformation, the position and attitude of the vehicle (referred to as the pose for short) directly affect the transformation result. In this case, S105 includes the following steps:
acquiring pose information of the vehicle;
performing coordinate transformation on the lane line observation result in combination with the pose information.
The pose information of the vehicle includes position information and attitude information, where the position information can be represented as coordinates, and the attitude information can be represented as, but is not limited to, pitch, yaw, and roll angles.
The pose information involved in the embodiments of the present invention can be acquired by a pose sensor. Specifically, the pose sensor may include, but is not limited to, at least one of the following: an inertial measurement unit (IMU) and a visual odometer.
In one specific design, if the pose information is collected by an IMU, the aforementioned acquisition step can be implemented by directly receiving or actively fetching the data collected by the IMU. The specific working principle of the IMU is not elaborated in the embodiments of the present invention. Likewise, no special limitation is imposed on attributes such as the detection accuracy of the IMU; detection can therefore be implemented without a special high-cost IMU, effectively reducing the hardware cost.
In another possible design, the visual odometer may be implemented by, but is not limited to, visual-inertial fusion (Visual-Inertial Integration System, VINS). Its data interaction with the lane line detection device is as described above and is not repeated.
Besides, some vehicles collect the vehicle pose information in real time by other means; in this case, the pose information can also be requested or acquired from the main controller of the vehicle.
Specifically, the pose information acquired in the preceding step is real-time vehicle pose information, which facilitates real-time lane line observation. Of course, in some possible scenarios the pose information may also be non-real-time; in that case, a frame correspondence between the pose information and the aforementioned image data is required, so that the pose information corresponding to the lane line observation result of the current frame can be fetched by that correspondence during processing, enabling the coordinate transformation.
Specifically, the method of performing coordinate transformation on the lane line observation result in combination with the pose information may be as follows:
the bird-view image of the lane line observation result may already have been obtained in the preceding steps by viewing-angle conversion, so this bird-view result can be transformed into the camera coordinate system; since the installation positions of the camera and the IMU on the vehicle can be regarded as fixed, a determinate positional relation exists between the two, so the lane line observation result can be further transformed into the IMU coordinate system; finally, in combination with the pose information of the vehicle, the lane line observation result can be transformed into the world coordinate system, enabling the subsequent unified processing.
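A compact sketch of the final IMU-to-world step is given below; the Euler convention and the use of scipy are assumptions of this example, and the camera-to-IMU extrinsics are taken as already applied:

```python
# Sketch: transform lane points from the IMU (body) frame to the world frame.
# Assumption: rpy = (roll, pitch, yaw) in radians with an "xyz" Euler convention.
import numpy as np
from scipy.spatial.transform import Rotation

def imu_to_world(pts_imu: np.ndarray, position: np.ndarray, rpy) -> np.ndarray:
    """pts_imu: (N, 3) lane points in the IMU frame; position: (3,) world position."""
    R_wb = Rotation.from_euler("xyz", rpy).as_matrix()   # body -> world rotation
    return pts_imu @ R_wb.T + position
```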
Through the above steps, the coordinate transformation from the local vehicle body coordinate system to the world coordinate system is achieved, so that the lane line observation results of the individual frames are in the same coordinate system during the subsequent historical observation result association, which facilitates later processing.
On this basis, depending on whether the lane line observation result of the current frame has historical frame image data observations, the embodiments of the present invention provide two ways of historical observation result association: with-data association and without-data association.
The historical frame image data observation involved in the embodiments of the present invention refers to the lane line observation results that were detected before the current frame of image data and are temporally continuous with the current frame without interruption.
For example, suppose the current frame is frame 5. If observations of frame 4 and frame 3 exist before the observation of frame 5, and they are continuous with, and temporally prior to, the frame 5 observation, then the observations of frame 4 and frame 3 can serve as the historical frame observations of the frame 5 lane line observation result. As another example, if only a frame 3 observation exists before the frame 5 observation and there is a temporal interruption between them, i.e., they are not consecutive in frame order, then the frame 3 observation cannot serve as a historical frame observation for frame 5. As yet another example, if the frame 5 observation is the starting frame, then the lane line observation result of this current frame has no historical frame observation.
In the embodiments of the present invention, if there are one or more frames of lane line observation results satisfying the foregoing conditions, they can serve as the historical frame observations of the current frame; in this case, with-data association is used to associate the lane line observation result of the current frame. Otherwise, if no such single-frame or multi-frame lane line observation results exist, without-data association is used to associate the lane line observation result of the current frame.
It should be noted that there are several situations in which the lane line observation result of the current frame has no historical frame observation. In one possible scenario, the current frame is the starting frame, so there is no historical frame observation result; in another possible scenario, there is an interruption or discontinuity between the current frame and the observations of one or more preceding frames; in yet another scenario, the historical frame observation results are lost and cannot be acquired.
The two historical observation result association methods are described in detail below.
First, with-data association, used to perform data association on the lane line observation result in combination with the historical frame image data observations.
Referring to FIG. 5, performing historical observation result association by with-data association may include the following steps:
S1062: among the lane line observation results, acquiring the target lane line observation result with the highest degree of matching with the historical frame image data observations.
Specifically, since there is at least one lane line in the lane line observation result of the current frame obtained by the preceding steps, a target lane line observation result needs to be determined among them to facilitate the association; the target lane line observation result must have a high degree of matching between the observation of the current frame and the historical frame observations.
The embodiments of the present invention give a specific implementation for acquiring the target lane line observation result: on at least two lane line dimensions, respectively acquire the cost values between the historical frame image data observations and the lane line observation result, where the cost values characterize the differences of the observation data on the individual lane line dimensions; then construct a cost matrix from the at least two cost values; finally, obtain the optimal solution of the cost matrix and take the lane line observation result corresponding to the optimal solution as the target lane line observation result.
The aforementioned lane line dimensions describe the line attitude and may specifically include at least two of the following: position, direction, and curvature. The cost value is concretely computed as follows: on any lane line dimension, acquire the difference between the lane line observation result of the current frame and a lane line in the historical frame observations. For example, in one possible design, the distance difference, angle difference, and curvature difference between the lane line observation result of the current frame and a lane line in the historical frame observations may be acquired as the cost values between the two.
As mentioned above, both the historical frame observations and the lane line observation result of the current frame contain at least two lane lines. When performing the aforementioned cost value acquisition, any lane line in the historical frame observations can be combined pairwise with any lane line in the current frame's observation result to obtain the cost value between each such pair. The cost values of one pair of lane lines on at least two lane line dimensions form a cost vector, and together with the cost vectors of the other pairs, the cost matrix is finally constructed. In this treatment, the optimal solution finally obtained characterizes the pairwise correspondence of lane lines between the historical frame observations and the current frame's observation result. Because of the large data volume, this treatment has high accuracy; however, the large processing volume also adversely affects the processing efficiency to some extent.
Therefore, to improve processing efficiency and reduce the data processing volume, in scenarios that satisfy the lane line parallelism mentioned in the aforementioned prior information, any single lane line or the lane center line in the historical frame observations may participate in the aforementioned cost value computation. In this case, the historical frame image data observation includes at least one of the following: the lane center line and an arbitrary lane line. The lane center line is a virtual line among the lane lines, located at the center of all lane lines and parallel to them. As shown in FIG. 6, a three-lane road section has four lane lines in total, and the dashed line shown in FIG. 6 is the lane center line.
With this treatment, when computing the aforementioned cost values, only the at least two cost values between each lane line in the current frame's observation result and the lane center line (or an arbitrary lane line) need to be computed, yielding a cost vector per lane line, from which the cost matrix is constructed. The optimal solution in this treatment is the target lane line observation result with the highest degree of matching to the lane center line (or the arbitrary lane line) of the historical frame observation. In this scenario based on a strong parallelism assumption, the above treatment effectively reduces the data processing volume and improves the detection efficiency of lane line observation, which also helps the vehicle obtain lane line observations in real time and improves vehicle safety on straight road sections.
Understandably, in the case where all lane lines of the historical frame observations participate in the cost matrix, the foregoing second scheme can likewise be adopted to obtain, for each lane line in the historical frame observations, its own corresponding target lane line, which is not repeated here.
Based on either of the above treatments, the cost matrix is obtained, and it only remains to solve the cost matrix for its optimal solution. In the concrete solving process, considering that each element of the cost matrix characterizes the difference between the historical frame observations and the current frame's lane line observation result, to obtain the target lane line with the smallest deviation the cost matrix can be solved under the principle of minimizing the total cost, yielding the target lane line observation result.
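For illustration, a sketch of solving the cost matrix under the minimum-total-cost principle is given below; the Hungarian method via scipy is one common choice, which the publication does not itself name:

```python
# Sketch: associate current-frame lane lines with historical lane lines by
# minimising the total cost. cost[i, j] combines, e.g., distance, angle and
# curvature differences between history line i and current observation j.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost: np.ndarray):
    rows, cols = linear_sum_assignment(cost)  # optimal assignment, min total cost
    return list(zip(rows.tolist(), cols.tolist()))
```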
S1064: determining the index information of the target lane line observation result according to its correspondence in the historical frame image data observations, to obtain the lane line detection result.
The index information inherits the index numbers of the historical frame image data observations. Inheriting the index numbers of the historical frame observations means re-determining the index numbers of the lane line observation result of the current frame according to the ordering scheme and sequence of the historical frame observations.
For example, suppose the index numbers of the historical frames of image data are 0-100 and those of the current frame are 0-10. If, according to the correspondence of the target lane line observation result in the historical frame observations, index number 0 of the current frame is determined to correspond to index number 95 of the historical frames, then by this association the index numbers of the current frame can be re-determined as 95-105. In this way, the lane line observation result of the current frame inherits the index numbers of the historical frame observations, and fusing the two by these index numbers yields the lane line detection result.
Second, without-data association, used to perform data association on the lane line observation result when there is no historical frame image data observation.
If the lane line observation result of the current frame has no historical frame observation, then when performing the fusion in time order, index information can be re-assigned to the lane line observation result of the current frame according to the index offsets between the lane line observation results, yielding the lane line detection result. The index offset indicates the index value deviation between the individual lane lines within each single frame of observation data.
For example, one lane line can be chosen as the reference; the start points of the other lane lines are then projected onto this lane line along the normal direction, and the index offset between each of the other lane lines and the reference lane line is computed, thereby aligning the lane lines; a minimal sketch of this alignment follows.
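The sketch below assumes each lane line is an ordered polyline sampled at a uniform index step, and a nearest-point search stands in for the normal projection:

```python
# Sketch: compute the index offset of a lane line against a reference lane line.
import numpy as np

def index_offset(ref_pts: np.ndarray, start_pt: np.ndarray) -> int:
    """ref_pts: (N, 2) ordered points of the reference line; start_pt: (2,)
    start point of another line. Returns the reference index nearest to the
    projected start point, used to re-assign that line's indices."""
    d = np.linalg.norm(ref_pts - start_pt, axis=1)
    return int(np.argmin(d))
```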
Either of the foregoing implementations achieves lane line detection. This detection approach requires neither expensive high-precision hardware instruments nor a high-precision map built in advance; it can observe lane lines in real time on road sections satisfying the aforementioned prior information, offering high flexibility at a low hardware cost.
In addition, to further improve the accuracy of the lane line detection result obtained by the preceding steps, referring to FIG. 7, the method further includes the following step:
S108: performing fitting optimization on the lane line detection result to obtain an optimized lane line detection result.
The treatment shown in FIG. 7 further refines the relatively coarse detected lane lines, so that the optimized lane lines can be output and displayed as smoother, better-looking curves.
Specifically, the fitting optimization process can be implemented by a preset fitting optimization model, whose inputs are the lane lines and their index information and whose outputs are the center line of the lane lines and the lane width. Understandably, the fitting optimization model needs to be preset or trained before executing step S108.
Specifically, the fitting optimization model may be a least-squares model.
In one possible design, the fitting optimization model may be designed to fit, within the previously determined index range, the observations of the individual lane lines of the current frame to the curve closest to the lane line observations in the historical frames of image data; the model must further guard against over-fitting. The inputs of the fitting optimization model may include at least the points of the lane line observation results, and its outputs may include at least the lane width w and the polynomial function of the lane center line curve.
Specifically, for the road section shown in FIG. 6, the widths of the three lanes are denoted w1, w2, and w3 respectively, and the w above can be expressed as follows.
[Formula image: Figure PCTCN2019074962-appb-000001 - the expression for w is given as an image in the original publication and is not recoverable from the text.]
In the road section scenario shown in FIG. 6, α denotes the lane semantic offset vector, which can be expressed as the transpose of the vector of offsets of all lane lines relative to the lane center line, specifically:
[Formula image: Figure PCTCN2019074962-appb-000002 - given as an image in the original publication.]
Here α1 denotes the offset vector of the first lane line from left to right relative to the lane center line, α2 that of the second lane line, α3 that of the third, and α4 that of the fourth. In the scenario shown in FIG. 6, this can be expressed specifically as:
[Formula image: Figure PCTCN2019074962-appb-000003 - given as an image in the original publication.]
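The three formula images above are not reproduced in the text. Purely as a hedged reconstruction, assuming the lane center line is the midline of the full three-lane carriageway of FIG. 6, the quantities could read:

```latex
% Hedged reconstruction, not the published formulas:
w = \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix}^{\mathsf T},\qquad
\alpha = \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 & \alpha_4 \end{bmatrix}^{\mathsf T}
       = \frac{1}{2}\begin{bmatrix} -(w_1+w_2+w_3) \\ w_1-w_2-w_3 \\ w_1+w_2-w_3 \\ w_1+w_2+w_3 \end{bmatrix}.
```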
Thus, on the basis of the preset fitting optimization model described above, executing the fitting optimization step only requires taking the lane line detection result and the index information as the input of the preset fitting optimization model and obtaining the model's output, namely the lane center line and lane width indicated by the lane line detection result; the optimized lane line detection result is then generated from the lane width and the lane center line.
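As a non-authoritative sketch of such a least-squares model, the following jointly fits a center-line polynomial and per-line offsets (from which lane widths follow as offset differences); the regularization term is an assumption standing in for the over-fitting guard:

```python
# Sketch: least-squares fit of a lane center-line polynomial c(y) plus a
# signed offset per lane line; lane widths are differences of adjacent offsets.
import numpy as np
from scipy.optimize import least_squares

def fit_center_and_offsets(lane_points, offsets0, order=3, reg=1e-3):
    """lane_points: list of (N_i, 2) arrays (x, y) per lane line, world frame;
    offsets0: (L,) initial signed offsets of each line from the center line."""
    k = order + 1
    def residuals(p):
        c, off = p[:k], p[k:]
        res = [np.polyval(c, pts[:, 1]) + off[i] - pts[:, 0]
               for i, pts in enumerate(lane_points)]
        res.append(reg * c)                    # crude guard against over-fitting
        return np.concatenate(res)
    sol = least_squares(residuals, np.concatenate([np.zeros(k), offsets0]))
    c, off = sol.x[:k], sol.x[k:]
    return c, off, np.diff(off)                # polynomial, offsets, lane widths
```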
Through the foregoing steps, the optimized lane line detection result is obtained. It can be applied directly to path planning or to the driving of an unmanned vehicle. For example, the vehicle may carry a vehicle control system (for example, including the supercomputing platform of an autonomous vehicle); the lane line detection result can be sent to the vehicle control system, which then generates vehicle control instructions from the detection result to control the vehicle's motion, e.g., to keep the vehicle centered in the lane or to perform a lane change. Understandably, the lane line detection may also be completed within the vehicle control system itself, in which case the result need not be sent to it; no limitation is imposed here.
It should be understood that some or all of the steps or operations in the above embodiments are merely examples; the embodiments of the present application may also perform other operations or variations of the various operations. In addition, the individual steps may be executed in orders different from those presented in the above embodiments, and it may not be necessary to perform all the operations in the above embodiments.
Embodiment 2
Based on the lane line detection method provided in Embodiment 1, an embodiment of the present invention further provides a device embodiment implementing the steps and methods of the above method embodiment.
An embodiment of the present invention provides a lane line detection device. Referring to FIG. 8, the lane line detection device 800 includes:
an acquisition module 81, configured to acquire image data including lane lines;
a processing module 82, configured to process the image data according to prior information of lane lines to obtain a lane line observation result of the current frame of image data;
an association module 83, configured to perform historical observation result association on the lane line observation result to obtain a lane line detection result.
In one possible design, the prior information includes the parallelism between lane lines.
In another possible design, the prior information further includes at least one of the following: the width, length, and spacing of lane lines falling within their respective preset value ranges.
Based on either of the foregoing designs, the processing module 82 includes:
an extraction submodule (not shown in FIG. 8), configured to perform skeleton extraction on the image data according to the prior information to obtain skeleton data of the lane lines;
a screening submodule (not shown in FIG. 8), configured to screen the skeleton data to obtain feature data of the lane lines;
a fitting submodule (not shown in FIG. 8), configured to perform line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
The extraction submodule (not shown in FIG. 8) may be specifically configured to:
perform viewing-angle conversion on the image data to obtain top-view data of the image data;
perform skeleton extraction on the top-view data according to the prior information to obtain the skeleton data.
The screening submodule (not shown in FIG. 8) may be specifically configured to:
extract candidate feature data from the skeleton data;
perform cluster analysis on the candidate feature data according to the prior information to obtain the feature data of the lane lines.
Specifically, the screening submodule is configured to extract the candidate feature data from the skeleton data by at least one of the following implementations: feature line extraction and fitting analysis.
In addition, the lane line detection device 800 may further include a coordinate transformation module (not shown in FIG. 8), specifically configured to:
before the historical observation result association is performed on the lane line observation result to obtain the lane line detection result, perform coordinate transformation on the lane line observation result, so that the transformed lane line observation result is in the world coordinate system.
Specifically, the coordinate transformation module (not shown in FIG. 8) is configured to:
acquire pose information of the vehicle;
perform coordinate transformation on the lane line observation result in combination with the pose information.
The coordinate transformation module (not shown in FIG. 8) is specifically configured to:
acquire the pose information of the vehicle collected by an inertial measurement unit (IMU).
In this embodiment of the present invention, the association module 83 specifically includes:
a with-data association submodule (not shown in FIG. 8), configured to perform data association on the lane line observation result in combination with historical frame image data observations; or
a without-data association submodule (not shown in FIG. 8), configured to perform data association on the lane line observation result when there is no historical frame image data observation.
In one possible design, the with-data association submodule (not shown in FIG. 8) is specifically configured to:
among the lane line observation results, acquire the target lane line observation result with the highest degree of matching with the historical frame image data observations;
determine the index information of the target lane line observation result according to its correspondence in the historical frame image data observations, to obtain the lane line detection result; wherein the index information inherits the index numbers of the historical frame image data observations.
The with-data association submodule (not shown in FIG. 8) is further specifically configured to:
on at least two lane line dimensions, respectively acquire the cost values between the historical frame image data observations and the lane line observation result, the cost values characterizing the differences of the observation data on the individual lane line dimensions;
construct a cost matrix according to the at least two cost values;
obtain the optimal solution of the cost matrix, and take the lane line observation result corresponding to the optimal solution as the target lane line observation result.
In this embodiment of the present invention, the lane line dimensions include at least two of the following: position, direction, and curvature.
In one possible design, the without-data association submodule (not shown in FIG. 8) is specifically configured to:
acquire the index offsets of the lane line observation result, the index offsets indicating the index value deviation between the individual lane lines within each single frame of observation data;
re-assign index information to the lane line observation result according to the index offsets, to obtain the lane line detection result.
The historical frame image data observation involved in the embodiments of the present invention includes at least one of the following: the lane center line and an arbitrary lane line.
In addition, the lane line detection device 800 may further include:
a fitting optimization module (not shown in FIG. 8), configured to perform fitting optimization on the lane line detection result to obtain an optimized lane line detection result.
In one possible design, the fitting optimization module (not shown in FIG. 8) is specifically configured to:
take the lane line detection result and the index information as the input of a preset fitting optimization model, and obtain the output of the fitting optimization model, namely the lane center line and lane width indicated by the lane line detection result;
generate the optimized lane line detection result according to the lane width and the lane center line.
The lane line detection result involved in the embodiments of the present invention includes at least one segment of lane line.
The image data is acquired by an image acquisition device.
The lane line detection device 800 of the embodiment shown in FIG. 8 can be used to execute the technical solution of the above method embodiment; for its implementation principles and technical effects, further reference can be made to the relevant description in the method embodiment. Optionally, the lane line detection device 800 may be a terminal or a server.
It should be understood that the division of the modules of the lane line detection device 800 shown in FIG. 8 above is merely a division of logical functions; in actual implementation they may be wholly or partly integrated into one physical entity, or physically separated. All of these modules may be implemented as software invoked by a processing element, or all as hardware; or some modules may be implemented as software invoked by a processing element and others as hardware. For example, the processing module 82 may be a separately established processing element, or may be integrated into a chip of the lane line detection device 800, e.g., of the terminal; it may also be stored as a program in the memory of the lane line detection device 800, with a processing element of the device invoking and executing the functions of the above modules. The other modules are implemented similarly. Moreover, these modules may be wholly or partly integrated together, or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In the implementation process, the steps of the above method or the above modules may be completed by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
In addition, an embodiment of the present invention further provides a lane line detection device. Referring to FIG. 9, the lane line detection device 800 includes:
a memory 810;
a processor 820; and
instructions;
wherein the instructions are stored in the memory 810 and are configured to be executed by the processor 820 to implement the method according to any implementation of Embodiment 1.
The number of processors 820 in the lane line detection device 800 may be one or more; a processor 820 may also be called a processing unit and can implement certain control functions. The processor 820 may be a general-purpose processor or a special-purpose processor, among others. In an optional design, the processor 820 may also hold instructions that can be run by the processor, so that the lane line detection device 800 executes the method described in the above method embodiment. In yet another possible design, the lane line detection device 800 may include circuitry that implements the sending, receiving, or communication functions of the foregoing method embodiments.
The number of memories 810 in the lane line detection device 800 may be one or more. The memory 810 stores instructions or intermediate data, and the instructions can be run on the processor 820 so that the lane line detection device 800 executes the method described in the above method embodiment. Optionally, other related data may also be stored in the memory 810. Optionally, instructions and/or data may also be stored in the processor 820.
The processor 820 and the memory 810 may be provided separately or integrated together.
In one possible design, the lane line detection device 800 may further include:
a transceiver 830, configured to receive the image data and the vehicle pose data.
Understandably, in this embodiment the transceiver 830 may be called a transceiving unit, a transceiver, a transceiving circuit, and so on, and is used to implement the transceiving functions of the lane line detection device 800.
In addition, the transceiver 830 may further complete other corresponding communication functions, while the processor 820 may be used to complete corresponding determination or control operations; optionally, corresponding instructions may also be stored in the memory 810. For the specific processing of each component, reference can be made to the relevant description of the foregoing embodiments.
The processor and transceiver described in this application can be implemented on an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application specific integrated circuit (ASIC), a printed circuit board (PCB), an electronic device, and so on. The processor and transceiver can also be manufactured with various IC process technologies, such as complementary metal oxide semiconductor (CMOS), N-type metal oxide semiconductor (NMOS), P-type metal oxide semiconductor (positive channel metal oxide semiconductor, PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), and so on.
In the embodiments of the present invention, the lane line detection device 800 may be a standalone apparatus or may be part of a larger apparatus.
Further, an embodiment of the present invention provides a lane line detection system. Referring to FIG. 10, the lane line detection system 1000 includes:
the lane line detection device 800 shown in FIG. 8 or FIG. 9, which includes at least a memory 810 and a processor 820, the memory 810 being configured to store instructions and the processor 820 being configured to execute the instructions and implement the method according to any implementation of Embodiment 1;
an image acquisition device 1010, configured to acquire initial image data and send it to a processing device;
the processing device 1020, configured to perform visual recognition processing on the initial image data and send the processed image data to the lane line detection device 800;
a pose sensor 1030, configured to acquire the pose data of the vehicle and send the pose data to the lane line detection device 800.
As mentioned above, in one possible design the pose sensor 1030 may include, but is not limited to, at least one of the following: an inertial measurement unit (IMU) or a visual odometer, where the visual odometer may include, but is not limited to, visual-inertial fusion (VINS).
In addition, as mentioned above, the aforementioned processing device 1020 may be designed as a standalone unit, or may be integrated with any one of the device 800, the image acquisition device 1010, and the pose sensor 1030, which is not specially limited in the embodiments of the present invention.
In addition, the embodiments of the present invention further give another possible design, in which the system includes:
the device 800 shown in FIG. 8 or FIG. 9, which includes at least a memory 810 and a processor 820, the memory 810 being configured to store instructions and the processor 820 being configured to execute the instructions and implement the method according to any implementation of Embodiment 1;
an image acquisition device 1010, configured to acquire initial image data and send the acquired initial image data to the device 800;
a pose sensor 1030, configured to acquire the pose data of the vehicle and send the pose data to the device 800.
Furthermore, an embodiment of the present invention provides a vehicle. Referring to FIG. 11, the vehicle 1100 includes the aforementioned lane line detection device 800.
Furthermore, an embodiment of the present invention provides a vehicle. Referring to FIG. 12, the vehicle 1200 includes: a vehicle body 1020; a power system 1030 connected to the vehicle body and configured to drive the vehicle to move; a vehicle control system 1010 configured to control the vehicle; and the aforementioned lane line detection system 1000.
FIG. 12 only exemplarily shows one relation between the lane line detection system 1000 and the vehicle control system 1010. It should be noted that the lane line detection system 1000 may also be wholly or partly integrated into the vehicle control system 1010; for example, one, several, or all of the memory, processor, image acquisition device, processing device, sensors, and other components may be integrated into or belong to the vehicle control system 1010. The separate depiction of the two in FIG. 12 does not restrict them to being two separate systems.
Since the modules in this embodiment can execute the method shown in Embodiment 1, for the parts not described in detail in this embodiment reference can be made to the relevant description of Embodiment 1.
In addition, an embodiment of the present invention provides a readable storage medium having instructions stored thereon, the instructions being executed by a processor to implement the method according to Embodiment 1.
Those of ordinary skill in the art can understand that all or some of the steps implementing the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments, and the aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are merely used to illustrate the technical solution of the present invention rather than to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some or all of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (26)

  1. A lane line detection method, comprising:
    acquiring image data including lane lines;
    processing the image data according to prior information of lane lines to obtain a lane line observation result of a current frame of image data;
    performing historical observation result association on the lane line observation result to obtain a lane line detection result.
  2. The method according to claim 1, wherein the prior information comprises: parallelism between lane lines.
  3. The method according to claim 1, wherein the prior information further comprises at least one of the following: the width, length, and spacing of lane lines.
  4. The method according to any one of claims 1 to 3, wherein processing the image data according to prior information of lane lines to obtain the lane line observation result of the current frame of image data comprises:
    performing skeleton extraction on the image data according to the prior information to obtain skeleton data of the lane lines;
    screening the skeleton data to obtain feature data of the lane lines;
    performing line fitting on the feature data to obtain the lane line observation result of the current frame of image data.
  5. The method according to claim 4, wherein performing skeleton extraction on the image data according to the prior information to obtain the skeleton data of the lane lines comprises:
    performing viewing-angle conversion on the image data to obtain top-view data of the image data;
    performing skeleton extraction on the top-view data according to the prior information to obtain the skeleton data.
  6. The method according to claim 4, wherein screening the skeleton data to obtain the feature data of the lane lines comprises:
    extracting candidate feature data from the skeleton data;
    performing cluster analysis on the candidate feature data according to the prior information to obtain the feature data of the lane lines.
  7. The method according to claim 6, wherein the candidate feature data is extracted from the skeleton data by at least one of the following implementations: feature line extraction and fitting analysis.
  8. The method according to any one of claims 1 to 3, wherein before performing historical observation result association on the lane line observation result to obtain the lane line detection result, the method further comprises:
    performing coordinate transformation on the lane line observation result, so that the transformed lane line observation result is in a world coordinate system.
  9. The method according to claim 8, wherein performing coordinate transformation on the lane line observation result comprises:
    acquiring pose information of a vehicle;
    performing coordinate transformation on the lane line observation result in combination with the pose information.
  10. The method according to claim 9, wherein acquiring the pose information of the vehicle comprises:
    acquiring the pose information of the vehicle by a pose sensor.
  11. The method according to claim 10, wherein the pose sensor comprises at least one of the following: an inertial measurement unit (IMU) or a visual odometer.
  12. The method according to claim 2 or 3, wherein the historical observation result association comprises:
    with-data association, for performing data association on the lane line observation result in combination with historical frame image data observations; or
    without-data association, for performing data association on the lane line observation result when there is no historical frame image data observation.
  13. The method according to claim 12, wherein performing historical observation result association on the lane line observation result by the with-data association to obtain the lane line detection result comprises:
    among the lane line observation results, acquiring a target lane line observation result with the highest degree of matching with the historical frame image data observations;
    determining index information of the target lane line observation result according to a correspondence of the target lane line observation result in the historical frame image data observations, to obtain the lane line detection result; wherein the index information inherits the index numbers of the historical frame image data observations.
  14. The method according to claim 13, wherein acquiring, among the lane line observation results, the target lane line observation result with the highest degree of matching with the historical frame image data observations comprises:
    on at least two lane line dimensions, respectively acquiring cost values between the historical frame image data observations and the lane line observation result, the cost values characterizing differences of observation data on the individual lane line dimensions;
    constructing a cost matrix according to the at least two cost values;
    obtaining an optimal solution of the cost matrix, and taking the lane line observation result corresponding to the optimal solution as the target lane line observation result.
  15. The method according to claim 14, wherein the lane line dimensions comprise at least two of the following: position, direction, and curvature.
  16. The method according to claim 12, wherein performing historical observation result association on the lane line observation result by the without-data association to obtain the lane line detection result comprises:
    acquiring index offsets of the lane line observation result, wherein the index offsets indicate index value deviations between the individual lane lines within each single frame of observation data;
    re-assigning index information to the lane line observation result according to the index offsets, to obtain the lane line detection result.
  17. The method according to claim 12, wherein the historical frame image data observation comprises at least one of the following: a lane center line and an arbitrary lane line.
  18. The method according to any one of claims 1 to 3, further comprising:
    performing fitting optimization on the lane line detection result to obtain an optimized lane line detection result.
  19. The method according to claim 18, wherein performing fitting optimization on the lane line detection result to obtain the optimized lane line detection result comprises:
    taking the lane line detection result and index information as an input of a preset fitting optimization model, and obtaining an output of the fitting optimization model, namely a lane center line and a lane width indicated by the lane line detection result;
    generating the optimized lane line detection result according to the lane width and the lane center line.
  20. The method according to claim 1, wherein the lane line detection result comprises at least one segment of lane line.
  21. The method according to claim 1, wherein the image data is acquired by an image acquisition device.
  22. The method according to claim 1, wherein the image data is obtained after visual recognition processing.
  23. The method according to any one of claims 1 to 3, further comprising:
    using the lane line detection result to cause a vehicle control system to generate a vehicle control instruction according to the lane line detection result and to control vehicle motion according to the vehicle control instruction.
  24. A lane line detection system, comprising:
    a lane line detection device, comprising a memory and a processor, the memory being configured to store instructions and the processor being configured to execute the instructions and implement the method according to any one of claims 1 to 23;
    an image acquisition device, configured to acquire initial image data and send it to a processing device;
    the processing device, configured to perform visual recognition processing on the initial image data and send the visually recognized image data to the lane line detection device;
    a pose sensor, configured to acquire vehicle pose data and send the pose data to the lane line detection device.
  25. A vehicle, comprising:
    a vehicle body;
    a power system connected to the vehicle body and configured to drive the vehicle to move;
    a vehicle control system configured to control the vehicle; and
    the lane line detection system according to claim 24.
  26. A computer-readable storage medium having instructions stored thereon, the instructions being executed by a processor to implement the method according to any one of claims 1 to 23.
PCT/CN2019/074962 2019-02-13 2019-02-13 车道线检测方法、装置、系统与车辆、存储介质 WO2020164010A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980005382.8A CN111316284A (zh) 2019-02-13 2019-02-13 车道线检测方法、装置、系统与车辆、存储介质
PCT/CN2019/074962 WO2020164010A1 (zh) 2019-02-13 2019-02-13 车道线检测方法、装置、系统与车辆、存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/074962 WO2020164010A1 (zh) 2019-02-13 2019-02-13 车道线检测方法、装置、系统与车辆、存储介质

Publications (1)

Publication Number Publication Date
WO2020164010A1 true WO2020164010A1 (zh) 2020-08-20

Family

ID=71157766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074962 WO2020164010A1 (zh) 2019-02-13 2019-02-13 车道线检测方法、装置、系统与车辆、存储介质

Country Status (2)

Country Link
CN (1) CN111316284A (zh)
WO (1) WO2020164010A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433203A (zh) * 2020-10-29 2021-03-02 同济大学 一种基于毫米波雷达数据的车道线形检测方法
CN112433211A (zh) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 一种位姿确定方法及装置、电子设备和存储介质
CN112906665A (zh) * 2021-04-06 2021-06-04 北京车和家信息技术有限公司 交通标线融合方法、装置、存储介质及电子设备
CN113591730A (zh) * 2021-08-03 2021-11-02 湖北亿咖通科技有限公司 一种识别车道分组线的方法、装置和设备
CN114166238A (zh) * 2021-12-06 2022-03-11 北京百度网讯科技有限公司 车道线的识别方法、装置及电子设备
CN115049994A (zh) * 2021-02-25 2022-09-13 广州汽车集团股份有限公司 一种车道线检测方法及系统、计算机可读存储介质
CN115223131A (zh) * 2021-11-09 2022-10-21 广州汽车集团股份有限公司 一种自适应巡航的跟随目标车辆检测方法、装置及汽车
CN116258792A (zh) * 2023-03-17 2023-06-13 广州小鹏自动驾驶科技有限公司 虚拟车道构建方法、装置、设备及计算机可读存储介质
CN116385529A (zh) * 2023-04-14 2023-07-04 小米汽车科技有限公司 确定减速带位置的方法、装置、存储介质以及车辆
CN116486354A (zh) * 2022-07-13 2023-07-25 阿波罗智能技术(北京)有限公司 车道线处理方法、装置、设备以及存储介质
CN117575920A (zh) * 2023-12-01 2024-02-20 昆易电子科技(上海)有限公司 车道线优化方法、装置及存储介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115857B (zh) * 2020-09-17 2024-03-01 福建牧月科技有限公司 智能汽车的车道线识别方法、装置、电子设备及介质
CN112285734B (zh) * 2020-10-30 2023-06-23 北京斯年智驾科技有限公司 基于道钉的港口无人集卡高精度对准方法及其对准系统
CN114581509A (zh) * 2020-12-02 2022-06-03 魔门塔(苏州)科技有限公司 一种目标定位方法及装置
CN113639782A (zh) * 2021-08-13 2021-11-12 北京地平线信息技术有限公司 车载传感器的外参标定方法和装置、设备和介质
CN114644019B (zh) * 2022-05-23 2022-08-02 苏州挚途科技有限公司 车道中心线的确定方法、装置和电子设备
CN115082884A (zh) * 2022-06-15 2022-09-20 广州文远知行科技有限公司 一种车道线检测方法、装置、无人设备及存储介质
CN115272182B (zh) * 2022-06-23 2023-05-26 禾多科技(北京)有限公司 车道线检测方法、装置、电子设备和计算机可读介质
CN114863380B (zh) * 2022-07-05 2022-10-25 高德软件有限公司 车道线识别方法、装置及电子设备
CN115731526B (zh) * 2022-11-21 2023-10-13 禾多科技(北京)有限公司 车道线识别方法、装置、电子设备和计算机可读介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940434A (zh) * 2014-04-01 2014-07-23 西安交通大学 基于单目视觉和惯性导航单元的实时车道线检测系统
CN105701449A (zh) * 2015-12-31 2016-06-22 百度在线网络技术(北京)有限公司 路面上的车道线的检测方法和装置
CN109084782A (zh) * 2017-06-13 2018-12-25 蔚来汽车有限公司 基于摄像头传感器的车道线地图构建方法以及构建系统
CN109186615A (zh) * 2018-09-03 2019-01-11 武汉中海庭数据技术有限公司 基于高精度地图的车道边线距离检测方法、装置及存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517111B (zh) * 2013-09-27 2018-09-07 比亚迪股份有限公司 车道线检测方法、系统、车道偏离预警方法及系统
CN108985230A (zh) * 2018-07-17 2018-12-11 深圳市易成自动驾驶技术有限公司 车道线检测方法、装置及计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103940434A (zh) * 2014-04-01 2014-07-23 西安交通大学 基于单目视觉和惯性导航单元的实时车道线检测系统
CN105701449A (zh) * 2015-12-31 2016-06-22 百度在线网络技术(北京)有限公司 路面上的车道线的检测方法和装置
CN109084782A (zh) * 2017-06-13 2018-12-25 蔚来汽车有限公司 基于摄像头传感器的车道线地图构建方法以及构建系统
CN109186615A (zh) * 2018-09-03 2019-01-11 武汉中海庭数据技术有限公司 基于高精度地图的车道边线距离检测方法、装置及存储介质

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112433203B (zh) * 2020-10-29 2023-06-20 同济大学 一种基于毫米波雷达数据的车道线形检测方法
CN112433203A (zh) * 2020-10-29 2021-03-02 同济大学 一种基于毫米波雷达数据的车道线形检测方法
CN112433211A (zh) * 2020-11-27 2021-03-02 浙江商汤科技开发有限公司 一种位姿确定方法及装置、电子设备和存储介质
CN112433211B (zh) * 2020-11-27 2022-11-29 浙江商汤科技开发有限公司 一种位姿确定方法及装置、电子设备和存储介质
CN115049994A (zh) * 2021-02-25 2022-09-13 广州汽车集团股份有限公司 一种车道线检测方法及系统、计算机可读存储介质
CN112906665A (zh) * 2021-04-06 2021-06-04 北京车和家信息技术有限公司 交通标线融合方法、装置、存储介质及电子设备
CN113591730A (zh) * 2021-08-03 2021-11-02 湖北亿咖通科技有限公司 一种识别车道分组线的方法、装置和设备
CN113591730B (zh) * 2021-08-03 2023-11-10 湖北亿咖通科技有限公司 一种识别车道分组线的方法、装置和设备
CN115223131A (zh) * 2021-11-09 2022-10-21 广州汽车集团股份有限公司 一种自适应巡航的跟随目标车辆检测方法、装置及汽车
CN114166238A (zh) * 2021-12-06 2022-03-11 北京百度网讯科技有限公司 车道线的识别方法、装置及电子设备
CN114166238B (zh) * 2021-12-06 2024-02-13 北京百度网讯科技有限公司 车道线的识别方法、装置及电子设备
CN116486354A (zh) * 2022-07-13 2023-07-25 阿波罗智能技术(北京)有限公司 车道线处理方法、装置、设备以及存储介质
CN116486354B (zh) * 2022-07-13 2024-04-16 阿波罗智能技术(北京)有限公司 车道线处理方法、装置、设备以及存储介质
CN116258792A (zh) * 2023-03-17 2023-06-13 广州小鹏自动驾驶科技有限公司 虚拟车道构建方法、装置、设备及计算机可读存储介质
CN116385529A (zh) * 2023-04-14 2023-07-04 小米汽车科技有限公司 确定减速带位置的方法、装置、存储介质以及车辆
CN116385529B (zh) * 2023-04-14 2023-12-26 小米汽车科技有限公司 确定减速带位置的方法、装置、存储介质以及车辆
CN117575920A (zh) * 2023-12-01 2024-02-20 昆易电子科技(上海)有限公司 车道线优化方法、装置及存储介质

Also Published As

Publication number Publication date
CN111316284A (zh) 2020-06-19

Similar Documents

Publication Publication Date Title
WO2020164010A1 (zh) 车道线检测方法、装置、系统与车辆、存储介质
EP3505866B1 (en) Method and apparatus for creating map and positioning moving entity
CN109100730B (zh) 一种多车协同快速建图方法
CN110796063B (zh) 用于检测车位的方法、装置、设备、存储介质以及车辆
CN106774431B (zh) 一种测绘无人机航线规划方法及装置
WO2020135446A1 (zh) 一种目标定位方法和装置、无人机
US20210158567A1 (en) Visual positioning method and apparatus, electronic device, and system
CN113409459B (zh) 高精地图的生产方法、装置、设备和计算机存储介质
CN110073362A (zh) 用于车道标记检测的系统及方法
CN103413352A (zh) 基于rgbd多传感器融合的场景三维重建方法
KR101261409B1 (ko) 영상 내 노면표시 인식시스템
CN114332360A (zh) 一种协同三维建图方法及系统
CN106548173A (zh) 一种基于分级匹配策略的改进无人机三维信息获取方法
WO2021083151A1 (zh) 目标检测方法、装置、存储介质及无人机
CN114419165B (zh) 相机外参校正方法、装置、电子设备和存储介质
CN106970620A (zh) 一种基于单目视觉的机器人控制方法
Zhou et al. Developing and testing robust autonomy: The university of sydney campus data set
CN113378605A (zh) 多源信息融合方法及装置、电子设备和存储介质
Zhang et al. Bundle adjustment for monocular visual odometry based on detections of traffic signs
WO2023155580A1 (zh) 一种对象识别方法和装置
CN114565863A (zh) 无人机图像的正射影像实时生成方法、装置、介质及设备
CN109636897B (zh) 一种基于改进RGB-D SLAM的Octomap优化方法
CN110501021A (zh) 一种基于相机和激光雷达融合的里程计估计方法及系统
CN112798020B (zh) 一种用于评估智能汽车定位精度的系统及方法
CN113763504A (zh) 地图更新方法、系统、车载终端、服务器及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19915078

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19915078

Country of ref document: EP

Kind code of ref document: A1