CN114863392A - Lane line detection method, device, vehicle and storage medium - Google Patents

Lane line detection method, device, vehicle and storage medium

Info

Publication number
CN114863392A
Authority
CN
China
Prior art keywords
lane line
feature map
network
sample
line detection
Prior art date
Legal status
Granted
Application number
CN202210444361.XA
Other languages
Chinese (zh)
Other versions
CN114863392B (en)
Inventor
叶航军
蔡锐
赵婕
祝贺
王斌
周珏嘉
邱叶
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202210444361.XA
Publication of CN114863392A
Application granted
Publication of CN114863392B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

The present disclosure relates to a lane line detection method, a device, a vehicle and a storage medium. The lane line detection method acquires a target detection image and inputs it into a preset lane line detection model, so that the model outputs the target position of the lane line in the target detection image. The preset lane line detection model is obtained by training a preset initial model that includes a segmentation network and a classification network: the segmentation network is used to determine a first predicted position of the lane line in a lane line detection sample image, and the classification network is used to determine a second predicted position of the lane line in the lane line detection sample image. Performing lane line detection with the preset lane line detection model can effectively improve the accuracy of lane line detection results.

Description

Lane line detection method, device, vehicle and storage medium

Technical Field

The present disclosure relates to the technical field of intelligent connected vehicles, and in particular to a lane line detection method, device, vehicle and storage medium.

Background

With the continuous development of science and technology, autonomous driving technology has been advancing rapidly. Safety has always been a central concern of autonomous driving: the environmental perception capability of an autonomous driving system plays a decisive role in its safety performance, and lane line detection is a key link in environmental perception.

During lane line detection, dim lighting, reflections, occlusions, or lane lines worn away by long-term exposure to rain often leave the lane lines in the captured road images blurred. Combined with the highly variable scenes in which lane lines appear, this makes lane line detection difficult and lowers the accuracy of the detection results.

Summary of the Invention

To overcome the problems existing in the related art, the present disclosure provides a lane line detection method, device, vehicle and storage medium.

According to a first aspect of the embodiments of the present disclosure, a lane line detection method is provided, the method comprising:

acquiring a target detection image;

inputting the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image;

wherein the preset lane line detection model is trained as follows:

acquiring a plurality of lane line detection sample images, each of which includes a labeled lane line position;

inputting each lane line detection sample image into a preset initial model, where the preset initial model includes a segmentation network and a classification network, the segmentation network is used to determine a first predicted position of the lane line in the lane line detection sample image, and the classification network is used to determine a second predicted position of the lane line in the lane line detection sample image;

training the preset initial model according to the first predicted position, the second predicted position and the labeled lane line position to obtain the preset lane line detection model.

Optionally, the preset initial model further includes a feature extraction network coupled with the classification network and the segmentation network, and the preset initial model is used to:

obtain, through the feature extraction network, a multi-scale sample feature map corresponding to each lane line detection sample image, and determine a first sample feature map including global image information according to the multi-scale sample feature map;

determine, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map;

determine, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the multi-scale sample feature map.

Optionally, the preset lane line detection model is trained as follows:

obtaining, through the feature extraction network in the preset initial model, the multi-scale sample feature map corresponding to the lane line detection sample image, and determining the first sample feature map including global image information according to the minimum-scale sample feature map in the multi-scale sample feature map;

determining, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map;

determining a first loss value corresponding to a first loss function according to the first predicted position and the labeled lane line position;

determining, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the minimum-scale sample feature map;

determining a second loss value corresponding to a second loss function according to the second predicted position and the labeled lane line position;

determining a third loss value corresponding to a third loss function according to the first loss value and the second loss value;

when the third loss value is greater than or equal to a preset loss value threshold, adjusting the model parameters of the preset initial model to obtain an updated target model, and executing again the steps from obtaining, through the feature extraction network in the preset initial model, the multi-scale sample feature map corresponding to the lane line detection sample image, to determining the third loss value corresponding to the third loss function according to the first loss value and the second loss value, until the third loss value is determined to be smaller than the preset loss value threshold, at which point the segmentation network in the current target model is deleted to obtain the preset lane line detection model.
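
Purely as an illustration, the iterative training procedure described above can be sketched in PyTorch-style code. All module and loss names (`feature_extractor`, `segmentation_head`, `classification_head`, `seg_loss_fn`, `cls_loss_fn`) are placeholders introduced here, and combining the first and second loss values into the third loss value as a weighted sum is an assumption; the embodiments do not fix a particular combination.

```python
import torch

# Minimal training-loop sketch for the preset initial model described above.
# All module/loss names are illustrative placeholders, not taken from the patent.
def train_preset_model(model, dataloader, optimizer, loss_threshold,
                       seg_loss_fn, cls_loss_fn, alpha=1.0, beta=1.0):
    while True:
        for sample_image, lane_label in dataloader:
            # Bottom-up features and the global (first) sample feature map.
            multi_scale_maps, first_feature_map = model.feature_extractor(sample_image)

            # Segmentation branch: per-pixel first predicted position -> first loss value.
            first_pred = model.segmentation_head(first_feature_map, multi_scale_maps)
            loss_1 = seg_loss_fn(first_pred, lane_label)

            # Classification branch: anchor-based second predicted position -> second loss value.
            second_pred = model.classification_head(multi_scale_maps[-1])
            loss_2 = cls_loss_fn(second_pred, lane_label)

            # Third loss value combines the first two (weighted sum assumed here).
            loss_3 = alpha * loss_1 + beta * loss_2

            if loss_3.item() < loss_threshold:
                # Drop the segmentation branch; keep the rest as the detection model.
                del model.segmentation_head
                return model

            # Otherwise adjust the model parameters and continue training.
            optimizer.zero_grad()
            loss_3.backward()
            optimizer.step()
```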

Optionally, the feature extraction network includes a feature pyramid network and a global feature extraction network, the feature pyramid network includes a bottom-up sub-network and a top-down sub-network, the input end of the global feature extraction network is coupled with the output end of the bottom-up sub-network, the output end of the global feature extraction network is coupled with the input end of the top-down sub-network and is also coupled with the input end of the segmentation network, the output end of the top-down sub-network is also coupled with the input end of the segmentation network, and the preset initial model is used to:

obtain, through the bottom-up sub-network in the feature pyramid network, the multi-scale sample feature map corresponding to the lane line detection sample image, and input the minimum-scale sample feature map in the multi-scale sample feature map into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and the top-down sub-network;

input a multi-scale second sample feature map to the segmentation network through the top-down sub-network, so that the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map.

Optionally, the segmentation network determining the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map includes:

performing up-sampling on the first sample feature map and the second sample feature map through the segmentation network to obtain a sample feature map to be segmented;

segmenting the sample feature map to be segmented in a preset segmentation manner to obtain a plurality of local feature maps;

performing convolution and concatenation on the plurality of local feature maps to obtain a target sample feature map;

determining, according to the target sample feature map, a first probability that each pixel in the lane line detection sample image belongs to a lane line;

determining the first predicted position of the lane line in the lane line detection sample image according to the first probability of each pixel.

Optionally, the preset segmentation manner includes:

dividing the sample feature map to be segmented into at least one of: H local feature maps of size C*W, C local feature maps of size H*W, and W local feature maps of size H*C, where H is the number of layers of the sample feature map to be segmented, C is its length, and W is its width.

Optionally, performing convolution and concatenation on the plurality of local feature maps to obtain the target sample feature map includes:

acquiring a current local feature map, and performing a convolution operation on the current local feature map to obtain a convolved specified local feature map;

concatenating the specified local feature map with the next local feature map corresponding to the current local feature map to obtain an updated current local feature map;

determining whether the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps;

when it is determined that the next local feature map corresponding to the current local feature map is not the last one of the plurality of local feature maps, executing again the steps from acquiring the current local feature map and performing a convolution operation on it to obtain a convolved specified local feature map, to determining whether the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps;

when it is determined that the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps, acquiring the current local feature map, performing a convolution operation on it to obtain a convolved specified local feature map, and taking the specified local feature map corresponding to the current local feature map as the target sample feature map.

Optionally, the output end of the bottom-up sub-network is coupled with the input end of the classification network, and the preset initial model is used to:

input the minimum-scale sample feature map to the classification network through the bottom-up sub-network;

divide, through the classification network, the minimum-scale sample feature map into a plurality of anchor boxes, determine a second probability that each anchor box belongs to the lane line, and determine the second predicted position of the lane line in the lane line detection sample image according to the second probability.

Optionally, inputting the target detection image into the preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image, includes:

obtaining, through the feature extraction network, a target feature map corresponding to the target detection image;

inputting the target feature map into the classification network to obtain the target position of the lane line in the target detection image output by the classification network.

According to a second aspect of the embodiments of the present disclosure, a lane line detection device is provided, the device comprising:

an acquisition module configured to acquire a target detection image;

a determination module configured to input the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image;

wherein the preset lane line detection model is trained as follows:

acquiring a plurality of lane line detection sample images, each of which includes a labeled lane line position;

inputting each lane line detection sample image into a preset initial model, where the preset initial model includes a segmentation network and a classification network, the segmentation network is used to determine a first predicted position of the lane line in the lane line detection sample image, and the classification network is used to determine a second predicted position of the lane line in the lane line detection sample image;

training the preset initial model according to the first predicted position, the second predicted position and the labeled lane line position to obtain the preset lane line detection model.

Optionally, the preset initial model further includes a feature extraction network coupled with the classification network and the segmentation network, and the preset initial model is used to:

obtain, through the feature extraction network, a multi-scale sample feature map corresponding to each lane line detection sample image, and determine a first sample feature map including global image information according to the multi-scale sample feature map;

determine, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map;

determine, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the multi-scale sample feature map.

Optionally, the preset lane line detection model is trained as follows:

obtaining, through the feature extraction network in the preset initial model, the multi-scale sample feature map corresponding to the lane line detection sample image, and determining the first sample feature map including global image information according to the minimum-scale sample feature map in the multi-scale sample feature map;

determining, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map;

determining a first loss value corresponding to a first loss function according to the first predicted position and the labeled lane line position;

determining, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the minimum-scale sample feature map;

determining a second loss value corresponding to a second loss function according to the second predicted position and the labeled lane line position;

determining a third loss value corresponding to a third loss function according to the first loss value and the second loss value;

when the third loss value is greater than or equal to a preset loss value threshold, adjusting the model parameters of the preset initial model to obtain an updated target model, and executing again the steps from obtaining, through the feature extraction network in the preset initial model, the multi-scale sample feature map corresponding to the lane line detection sample image, to determining the third loss value corresponding to the third loss function according to the first loss value and the second loss value, until the third loss value is determined to be smaller than the preset loss value threshold, at which point the segmentation network in the current target model is deleted to obtain the preset lane line detection model.

Optionally, the feature extraction network includes a feature pyramid network and a global feature extraction network, the feature pyramid network includes a bottom-up sub-network and a top-down sub-network, the input end of the global feature extraction network is coupled with the output end of the bottom-up sub-network, the output end of the global feature extraction network is coupled with the input end of the top-down sub-network and is also coupled with the input end of the segmentation network, the output end of the top-down sub-network is also coupled with the input end of the segmentation network, and the preset initial model is used to:

obtain, through the bottom-up sub-network in the feature pyramid network, the multi-scale sample feature map corresponding to the lane line detection sample image, and input the minimum-scale sample feature map in the multi-scale sample feature map into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and the top-down sub-network;

input a multi-scale second sample feature map to the segmentation network through the top-down sub-network, so that the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map.

Optionally, determining, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map includes:

performing up-sampling on the first sample feature map and the second sample feature map through the segmentation network to obtain a sample feature map to be segmented;

segmenting the sample feature map to be segmented in a preset segmentation manner to obtain a plurality of local feature maps;

performing convolution and concatenation on the plurality of local feature maps to obtain a target sample feature map;

determining, according to the target sample feature map, a first probability that each pixel in the lane line detection sample image belongs to a lane line;

determining the first predicted position of the lane line in the lane line detection sample image according to the first probability of each pixel.

Optionally, the preset segmentation manner includes:

dividing the sample feature map to be segmented into at least one of: H local feature maps of size C*W, C local feature maps of size H*W, and W local feature maps of size H*C, where H is the number of layers of the sample feature map to be segmented, C is its length, and W is its width.

Optionally, performing convolution and concatenation on the plurality of local feature maps to obtain the target sample feature map includes:

acquiring a current local feature map, and performing a convolution operation on the current local feature map to obtain a convolved specified local feature map;

concatenating the specified local feature map with the next local feature map corresponding to the current local feature map to obtain an updated current local feature map;

determining whether the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps;

when it is determined that the next local feature map corresponding to the current local feature map is not the last one of the plurality of local feature maps, executing again the steps from acquiring the current local feature map and performing a convolution operation on it to obtain a convolved specified local feature map, to determining whether the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps;

when it is determined that the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps, acquiring the current local feature map, performing a convolution operation on it to obtain a convolved specified local feature map, and taking the specified local feature map corresponding to the current local feature map as the target sample feature map.

Optionally, the output end of the bottom-up sub-network is coupled with the input end of the classification network, and the preset initial model is used to:

input the minimum-scale sample feature map to the classification network through the bottom-up sub-network;

divide, through the classification network, the minimum-scale sample feature map into a plurality of anchor boxes, determine a second probability that each anchor box belongs to the lane line, and determine the second predicted position of the lane line in the lane line detection sample image according to the second probability.

Optionally, the determination module is configured to:

obtain, through the feature extraction network, a target feature map corresponding to the target detection image;

input the target feature map into the classification network to obtain the target position of the lane line in the target detection image output by the classification network.

According to a third aspect of the embodiments of the present disclosure, a vehicle is provided, the vehicle including the lane line detection device described in the second aspect above.

According to a fourth aspect of the embodiments of the present disclosure, a lane line detection device is provided, including:

a memory on which a computer program is stored; and

a processor configured to execute the computer program in the memory to implement the steps of the method of the first aspect above.

According to a fifth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, and the program instructions, when executed by a processor, implement the steps of the method described in the first aspect of the present disclosure.

The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: a target detection image is acquired and input into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image, where the preset lane line detection model is obtained by training a preset initial model including a segmentation network and a classification network; performing lane line detection with this preset lane line detection model can effectively improve the accuracy of lane line detection results.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.

FIG. 1 is a flowchart of a lane line detection method according to an exemplary embodiment of the present disclosure;

FIG. 2 is a block diagram of the model structure of a preset initial model according to an exemplary embodiment of the present disclosure;

FIG. 3 is a schematic diagram of the principle of the segmentation network according to an exemplary embodiment of the present disclosure;

FIG. 4 is a training flowchart of a preset lane line detection model according to an exemplary embodiment of the present disclosure;

FIG. 5 is a flowchart of a lane line detection method according to the embodiment shown in FIG. 4;

FIG. 6 is a block diagram of a lane line detection device according to an exemplary embodiment of the present disclosure;

FIG. 7 is a block diagram of a lane line detection device according to an exemplary embodiment.

Detailed Description of the Embodiments

Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.

It should be noted that all actions of acquiring signals, information or data in this application are performed in compliance with the applicable data protection laws and policies of the relevant country and with the authorization of the owner of the corresponding device.

Before describing the specific embodiments of the present disclosure in detail, the application scenarios of the present disclosure are first explained. The present disclosure can be applied to lane line detection, which is an important part of lane recognition when perceiving the driving environment during automatic driving, unmanned driving or route navigation. Current lane recognition methods fall mainly into two categories: image-feature-based methods and model-based methods. Image-feature-based recognition exploits the differences between the features of lane marking edges and the surrounding road surface in the road image, such as image texture, edge geometry, boundary continuity, and the gray value and contrast of the region of interest; image features can be extracted with methods such as threshold segmentation, and the algorithms are simple and easy to implement. Their drawback is that they are easily disturbed by noise, tree shadows, illumination changes, and broken or discontinuous markings, which leads to low accuracy of lane line detection results and poor reliability of lane recognition. Model-matching methods build a lane line model based on the geometric features of structured roads, identify the road model parameters, and then recognize the lane markings. Because of the limitations of the image-feature approach, practical research generally combines the image-feature method and the model-matching method to build a suitable lane line detection model, which improves the accuracy of lane line recognition on the one hand and strengthens the adaptability of the system on the other. However, no existing model can guarantee the accuracy of lane line detection results in all detection scenarios.

The present disclosure provides a lane line detection method, device, vehicle and storage medium. The lane line detection method acquires a target detection image and inputs it into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image. The preset lane line detection model is trained from a preset initial model that includes a segmentation network and a classification network, and performing lane line detection with this model can effectively improve the accuracy of lane line detection results in various detection scenarios.

The technical solutions of the present disclosure are described in detail below with reference to specific embodiments.

FIG. 1 is a flowchart of a lane line detection method according to an exemplary embodiment of the present disclosure; as shown in FIG. 1, the lane line detection method may include:

Step 101: acquiring a target detection image.

The target detection image may be any image to be detected in various lane line detection scenarios (for example, scenarios with straight lane lines, hyperbolic lane lines, or B-spline-curve lane lines), and the image to be detected may include the lane lines to be detected.

Step 102: inputting the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image.

In this step, the preset lane line detection model is obtained through training as shown in the following S11 to S13:

S11: acquiring a plurality of lane line detection sample images.

Each lane line detection sample image includes a labeled lane line position.

S12: inputting each lane line detection sample image into a preset initial model, where the preset initial model includes a segmentation network and a classification network, the segmentation network is used to determine a first predicted position of the lane line in the lane line detection sample image, and the classification network is used to determine a second predicted position of the lane line in the lane line detection sample image.

The segmentation network may include multiple convolution layers and a classification layer, with different numbers of convolution kernels in different convolution layers. The segmentation network divides the feature map corresponding to the lane line detection sample image into multiple local feature maps, then performs convolution and concatenation on these local feature maps to obtain a target sample feature map, and inputs the target sample feature map into the classification layer, so that the classification layer outputs the first predicted position of the lane line in the lane line detection sample image.

It should be noted that, by dividing the feature map and performing convolution operations on the resulting local feature maps, the segmentation network can effectively capture more feature-map details, and performing the classification operation on a feature map containing more details can effectively improve the accuracy of the classification result and thus of the lane line detection result. The segmentation network may divide the feature map by cutting it into multiple local feature maps along different directions of the feature map; for example, an H*C*W feature map may be cut into H local feature maps of size C*W, C local feature maps of size H*W, or W local feature maps of size H*C, where H is the number of layers of the feature map, C is its length, and W is its width. In addition, the classification layer in the segmentation network may be a fully connected layer or another network module used for classification; there are many network modules with a classification function in the prior art, and the present disclosure does not limit this.

S13: training the preset initial model according to the first predicted position, the second predicted position and the labeled lane line position to obtain the preset lane line detection model.

It should be noted that during training the preset lane line detection model is obtained by training a preset initial model that includes both the classification network and the segmentation network, but during application the preset lane line detection model may include both the segmentation network and the classification network, may include only the classification network without the segmentation network, or may include only the segmentation network without the classification network.

In the above technical solution, a target detection image is acquired and input into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image, where the preset lane line detection model is obtained by training a preset initial model including a segmentation network and a classification network. Performing lane line detection with this preset lane line detection model can effectively improve the accuracy of lane line detection results in various detection scenarios.

FIG. 2 is a block diagram of the model structure of a preset initial model according to an exemplary embodiment of the present disclosure. As shown in FIG. 2, the preset initial model further includes a feature extraction network coupled with the classification network and the segmentation network, and the preset initial model is used to:

obtain, through the feature extraction network, a multi-scale sample feature map corresponding to each lane line detection sample image, and determine a first sample feature map including global image information according to the multi-scale sample feature map;

determine, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map;

determine, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the multi-scale sample feature map.

The feature extraction network may include a feature pyramid network and a global feature extraction network. The feature pyramid network includes a bottom-up sub-network and a top-down sub-network, and the global feature extraction network may be the Encoder of a Transformer network. The input end of the global feature extraction network is coupled with the output end of the bottom-up sub-network, the output end of the global feature extraction network is coupled with the input end of the top-down sub-network and also with the input end of the segmentation network, the output end of the top-down sub-network is also coupled with the input end of the segmentation network, and the output end of the bottom-up sub-network is coupled with the input end of the classification network. The preset initial model is used to:

obtain, through the bottom-up sub-network in the feature pyramid network, the multi-scale sample feature maps corresponding to the lane line detection sample image (feature maps x1, x2, x3 and x4 in FIG. 2), and input the minimum-scale sample feature map (feature map x4 in FIG. 2) into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and the top-down sub-network (in FIG. 2, the global feature extraction network converts feature map x4 into feature map X1);

input multi-scale second sample feature maps (feature maps X1, X2, X3 and X4 in FIG. 2) to the segmentation network through the top-down sub-network, so that the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature maps.
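
For illustration only, the coupling between the bottom-up sub-network, the global feature extraction network, the top-down sub-network and the two heads described above might be wired as in the following PyTorch-style sketch; all sub-module names are placeholders introduced here, and the internal structure of each sub-network is not shown.

```python
import torch.nn as nn

class PresetInitialModel(nn.Module):
    """Sketch of the wiring in FIG. 2; all sub-modules are assumed placeholders."""

    def __init__(self, backbone, encoder, top_down, seg_head, cls_head):
        super().__init__()
        self.backbone = backbone    # bottom-up sub-network, returns [x1, x2, x3, x4]
        self.encoder = encoder      # global feature extraction (e.g. Transformer Encoder)
        self.top_down = top_down    # top-down sub-network of the feature pyramid
        self.seg_head = seg_head    # segmentation network
        self.cls_head = cls_head    # classification network

    def forward(self, image):
        x1, x2, x3, x4 = self.backbone(image)         # multi-scale sample feature maps
        X1 = self.encoder(x4)                         # first sample feature map (global info)
        X2, X3, X4 = self.top_down(X1, [x3, x2, x1])  # multi-scale second sample feature maps
        first_pred = self.seg_head([X1, X2, X3, X4])  # first predicted position (per pixel)
        second_pred = self.cls_head(x4)               # second predicted position (anchor boxes)
        return first_pred, second_pred
```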

It should be noted that in a feature pyramid network the feature maps at lower (shallow) layers contain rich detail information, while the feature maps at higher (deep) layers contain rich semantic information. The process by which the segmentation network determines the first predicted position of the lane line in the lane line detection sample image from the first sample feature map and the second sample feature maps may include the following steps S21 to S25:

S21: performing up-sampling on the first sample feature map and the second sample feature maps through the segmentation network to obtain a sample feature map to be segmented.

For example, the multi-scale second sample feature maps may include feature maps X1, X2, X3 and X4 as shown in FIG. 2. Feature maps X1, X2 and X3 are each up-sampled to obtain feature maps of the same size as feature map X4, and the up-sampled versions of X1, X2 and X3 are concatenated with feature map X4 to obtain the sample feature map to be segmented.
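
A minimal sketch of this up-sampling and concatenation step, assuming (N, C, H, W) tensors and bilinear interpolation (the interpolation mode is not specified above), could look as follows.

```python
import torch
import torch.nn.functional as F

def build_feature_to_segment(X1, X2, X3, X4):
    """Upsample X1..X3 to the spatial size of X4 and concatenate along channels (step S21 sketch)."""
    target_size = X4.shape[-2:]
    upsampled = [F.interpolate(x, size=target_size, mode="bilinear", align_corners=False)
                 for x in (X1, X2, X3)]
    return torch.cat(upsampled + [X4], dim=1)  # sample feature map to be segmented
```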

S22: segmenting the sample feature map to be segmented in a preset segmentation manner to obtain a plurality of local feature maps.

The preset segmentation manner includes dividing the sample feature map to be segmented into at least one of: H local feature maps of size C*W, C local feature maps of size H*W, and W local feature maps of size H*C, where H is the number of layers of the sample feature map to be segmented, C is its length, and W is its width.

For example, as shown in FIG. 3, which is a schematic diagram of the principle of the segmentation network according to an exemplary embodiment of the present disclosure, the feature map is sliced from four directions. Regarding the H*C*W sample feature map to be segmented as a cube: along its height (corresponding to H), from top to bottom, the H*C*W feature map is divided into H local feature maps of size C*W (SCNN_D in FIG. 3); along its height from bottom to top, it is divided into another H local feature maps of size C*W (SCNN_U in FIG. 3); along its width (corresponding to W), from left to right, it is divided into W local feature maps of size H*C (SCNN_R in FIG. 3); and along its width from right to left, it is divided into another W local feature maps of size H*C (SCNN_L in FIG. 3).
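
The four-direction slicing can be illustrated with the following sketch, which assumes the feature map to be segmented is a single tensor of shape (H, C, W) in the notation used above; the function name is a placeholder.

```python
def directional_slices(feature_map):
    """Split an H*C*W feature map into directional slices (step S22 sketch).

    feature_map: tensor of shape (H, C, W), where H is the number of layers,
    C the length and W the width, following the notation used above.
    """
    H, C, W = feature_map.shape
    scnn_d = [feature_map[h] for h in range(H)]        # H slices of C*W, top to bottom
    scnn_u = list(reversed(scnn_d))                    # H slices of C*W, bottom to top
    scnn_r = [feature_map[:, :, w] for w in range(W)]  # W slices of H*C, left to right
    scnn_l = list(reversed(scnn_r))                    # W slices of H*C, right to left
    return scnn_d, scnn_u, scnn_r, scnn_l
```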

S23: performing convolution and concatenation on the plurality of local feature maps to obtain a target sample feature map.

A possible implementation of this step is as follows:

acquiring a current local feature map and performing a convolution operation on it to obtain a convolved specified local feature map; concatenating the specified local feature map with the next local feature map corresponding to the current local feature map to obtain an updated current local feature map; determining whether the next local feature map corresponding to the current local feature map is the last one of the plurality of local feature maps; when it is not the last one, executing again the steps from acquiring the current local feature map and performing a convolution operation on it, to determining whether the next local feature map corresponding to the current local feature map is the last one; and when it is the last one, acquiring the current local feature map, performing a convolution operation on it to obtain a convolved specified local feature map, and taking the specified local feature map corresponding to the current local feature map as the target sample feature map.

For example, still taking the example shown in FIG. 3: after obtaining the H top-to-bottom local feature maps of size C*W (denoted C*W-1, C*W-2, ..., C*W-H), a convolution operation (with a 1*1 kernel) is first performed on local feature map C*W-1; the convolution result of C*W-1 (the specified local feature map corresponding to C*W-1) is then concatenated with local feature map C*W-2 to obtain the current local feature map after concatenation; a convolution operation (with two 1*1 kernels) is then performed on this current local feature map to obtain the specified local feature map corresponding to C*W-2, which is concatenated with local feature map C*W-3 to obtain the updated current local feature map. This cycle continues until the specified local feature map corresponding to C*W-H is obtained, whose dimensions are H*C*W. Then the specified local feature map corresponding to C*W-H (also an H*C*W feature map) is sliced from bottom to top along the height direction to obtain H bottom-to-top local feature maps of size C*W (denoted C*W+1, C*W+2, ..., C*W+H), on which the convolution-and-concatenation operation is performed again to obtain the specified local feature map corresponding to C*W+H (with dimensions H*C*W). The specified local feature map corresponding to C*W+H is then divided, along the width of the cube (corresponding to W) from left to right, into W local feature maps of size H*C (denoted H*C-1, H*C-2, ..., H*C-W), on which the preset convolution-and-concatenation operation is performed to obtain the specified local feature map corresponding to H*C-W (with dimensions H*C*W). The specified local feature map corresponding to H*C-W is then divided, along the width of the cube (corresponding to W) from right to left, into W right-to-left local feature maps (denoted H*C+1, H*C+2, ..., H*C+W), on which the preset convolution-and-concatenation operation is performed to obtain the specified local feature map corresponding to H*C+W, and this specified local feature map corresponding to H*C+W is taken as the target sample feature map.
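
The slice-by-slice convolution and concatenation can be sketched as below. The channel bookkeeping is simplified here (a single 2C-to-C 1*1 convolution after each concatenation) so that the example runs; the description above only fixes that 1*1 convolutions and concatenations are applied slice by slice, so this is an illustrative reading rather than the exact embodiment.

```python
import torch
import torch.nn as nn

class SequentialSliceConv(nn.Module):
    """Step S23 sketch: convolve the current local map, concatenate it with the next
    one, and repeat until the last slice; the outputs are restacked into a full map."""

    def __init__(self, channels):
        super().__init__()
        self.conv_first = nn.Conv1d(channels, channels, kernel_size=1)
        self.conv_merge = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, slices):
        # slices: ordered list of local feature maps, each of shape (N, C, W)
        outputs = []
        current = slices[0]
        conv = self.conv_first
        for nxt in slices[1:]:
            specified = conv(current)                 # convolved "specified" local map
            outputs.append(specified)
            current = torch.cat([specified, nxt], 1)  # concatenate with the next local map
            conv = self.conv_merge
        outputs.append(conv(current))                 # specified map for the last slice
        return torch.stack(outputs, dim=2)            # restack into a full feature map
```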

S24,根据该目标样本特征图确定该车道线检测样本图像中每个像素属于车道线的第一概率。S24: Determine a first probability that each pixel in the lane line detection sample image belongs to a lane line according to the target sample feature map.

In this step, since the target sample feature map contains more detailed information from the lane line detection sample image, the first probability, obtained from the target sample feature map, that each pixel in the lane line detection sample image belongs to a lane line is more accurate.

S25,根据每个像素的该第一概率确定该车道线检测样本图像中车道线的第一预测位置。S25: Determine the first predicted position of the lane line in the lane line detection sample image according to the first probability of each pixel.

In this step, the pixel positions whose first probability is greater than or equal to a first preset probability threshold may be taken as the first predicted position of the lane line in the lane line detection sample image.

通过以上S21至S25,能够通过该分割网络得到细节信息更多的目标样本特征图,然后通过该目标样本特征图确定该车道线检测样本图像中每个像素属于车道线的第一概率,根据该第一概率确定第一预测位置能够有效提升该第一预测位置的准确性。Through the above S21 to S25, the target sample feature map with more detailed information can be obtained through the segmentation network, and then the first probability that each pixel in the lane line detection sample image belongs to the lane line is determined through the target sample feature map. Determining the first predicted position with the first probability can effectively improve the accuracy of the first predicted position.
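As a rough sketch of S24 and S25, the segmentation output can be turned into lane pixel positions as follows; the sigmoid segmentation head and the 0.5 threshold are placeholders, since the disclosure only requires some first preset probability threshold.

    import torch

    def first_predicted_position(target_sample_feature_map, seg_head, threshold=0.5):
        # S24: per-pixel first probability of belonging to a lane line
        logits = seg_head(target_sample_feature_map)       # (N, 1, H, W)
        first_prob = torch.sigmoid(logits).squeeze(1)
        # S25: keep pixel positions whose first probability reaches the threshold
        lane_mask = first_prob >= threshold
        return [torch.nonzero(m, as_tuple=False) for m in lane_mask]   # (row, col) per image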

可选地,该预设初始模型,还用于:Optionally, the preset initial model is also used for:

The minimum-scale sample feature map (feature map x4 in FIG. 2) is input to the classification network through the bottom-up sub-network;

The minimum-scale sample feature map is divided into a plurality of anchor boxes through the classification network, a second probability that each anchor box belongs to the lane line is determined, and the second predicted position of the lane line in the lane line detection sample image is determined according to the second probability.

It should be noted that there are many ways in the prior art to divide feature map x4 into anchor boxes and they are easy to obtain, so they are not repeated here; the specific details of how the classification network (which may be a fully connected layer) determines the second probability that each anchor box belongs to the lane line are a common calculation process in the art, and the present disclosure does not limit this. In addition, the pixel positions corresponding to the anchor boxes whose second probability is greater than or equal to a second preset probability threshold may be taken here as the second predicted position.

With the above technical solution, the second predicted position of the lane line in the lane line detection sample image can be obtained through the classification network. Since the second predicted position is obtained from the minimum-scale sample feature map output by the FPN (Feature Pyramid Network), and the minimum-scale sample feature map contains rich contextual feature information, the accuracy of the second predicted position can also be effectively improved.
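A hypothetical sketch of this classification branch is shown below; the fully connected layer, the sigmoid scoring and the fixed number of anchor boxes are assumptions for illustration, since the disclosure leaves the exact anchor layout to the prior art.

    import torch
    import torch.nn as nn

    class AnchorClassifier(nn.Module):
        def __init__(self, in_features, num_anchors):
            super().__init__()
            self.fc = nn.Linear(in_features, num_anchors)   # one score per anchor box

        def forward(self, x4, threshold=0.5):
            # x4: the minimum-scale sample feature map, flattened per image
            scores = self.fc(torch.flatten(x4, start_dim=1))
            second_prob = torch.sigmoid(scores)             # second probability per anchor box
            keep = second_prob >= threshold                 # anchors taken as lane line
            return second_prob, keep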

图4是本公开一示例性实施例示出的一种预设车道线检测模型的训练流程图;如图4所示,该预设车道线检测模型可以通过以下步骤训练得到:FIG. 4 is a training flow chart of a preset lane line detection model shown in an exemplary embodiment of the present disclosure; as shown in FIG. 4 , the preset lane line detection model can be obtained by training through the following steps:

步骤401,获取多个车道线检测样本图像,该车道线检测图像包括车道线标注位置。Step 401: Acquire a plurality of lane line detection sample images, where the lane line detection images include lane line marked positions.

The lane line detection sample image may be any image from various lane line detection scenarios (for example, scenes with straight lane lines, hyperbolic lane lines, or B-spline-curve lane lines), and the lane line positions can be annotated in the lane line detection sample image.

Step 402: obtain the multi-scale sample feature map corresponding to the lane line detection sample image through the feature extraction network in the preset initial model, and determine a first sample feature map including global image information according to the minimum-scale sample feature map among the multi-scale sample feature maps.

Here the feature extraction network may include a feature pyramid network and a global feature extraction network; the feature pyramid network includes a bottom-up sub-network and a top-down sub-network, and the global feature extraction network may be the Encoder of a Transformer network. The multi-scale sample feature map corresponding to the lane line detection sample image (feature maps x1, x2, x3 and x4 in FIG. 2) is obtained through the bottom-up sub-network of the feature pyramid network, and the minimum-scale sample feature map among them (feature map x4 in FIG. 2) is input into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and to the top-down sub-network.
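A compact sketch of this step 402 pipeline is given below, with ResNet-18 as the bottom-up backbone and a two-layer Transformer encoder as the global feature extraction network; both choices, and all layer sizes, are assumptions for illustration only.

    import torch
    import torch.nn as nn
    import torchvision

    class FeatureExtractor(nn.Module):
        def __init__(self, dim=256, heads=8, depth=2):
            super().__init__()
            backbone = torchvision.models.resnet18(weights=None)
            self.stages = nn.ModuleList([
                nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                              backbone.maxpool, backbone.layer1),
                backbone.layer2, backbone.layer3, backbone.layer4,
            ])
            self.proj = nn.Conv2d(512, dim, kernel_size=1)   # map x4 to the encoder width
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, image):
            feats, x = [], image
            for stage in self.stages:                 # bottom-up pass: x1, x2, x3, x4
                x = stage(x)
                feats.append(x)
            x4 = self.proj(feats[-1])                 # minimum-scale sample feature map
            n, c, h, w = x4.shape
            tokens = x4.flatten(2).transpose(1, 2)    # (N, H*W, C) token sequence
            encoded = self.encoder(tokens)            # Transformer Encoder adds global context
            first_sample_feature_map = encoded.transpose(1, 2).reshape(n, c, h, w)
            return feats, first_sample_feature_map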

步骤403,通过该分割网络根据该第一样本特征图确定该车道线检测样本图像中车道线的第一预测位置。Step 403 , determining the first predicted position of the lane line in the lane line detection sample image through the segmentation network according to the first sample feature map.

In this step, the first sample feature map is input into the top-down sub-network, so that the top-down sub-network inputs multi-scale second sample feature maps (feature maps X1, X2, X3 and X4 in FIG. 2) to the segmentation network, and the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature maps.

其中,该分割网络根据该第一样本特征图和该第二样本特征图确定该车道线检测样本图像中车道线的第一预测位置的具体实施方式可以参见以上S21至S25所示的步骤,本公开在此不再赘述。Wherein, the specific implementation of the segmentation network determining the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map may refer to the steps shown in the above S21 to S25, The present disclosure will not be repeated here.

步骤404,根据该第一预测位置和该车道线标注位置确定第一损失函数对应的第一损失值。Step 404: Determine a first loss value corresponding to the first loss function according to the first predicted position and the marked position of the lane line.

其中,该第一损失函数可以是交叉熵损失函数,也可以是Focal Loss(灶性损失)损失函数,还可以是现有技术中的其他损失函数。The first loss function may be a cross-entropy loss function, a Focal Loss (focal loss) loss function, or other loss functions in the prior art.

步骤405,通过该分类网路根据该最小尺度样本特征图确定该车道线检测样本图像中车道线的第二预测位置。Step 405: Determine the second predicted position of the lane line in the lane line detection sample image through the classification network according to the minimum scale sample feature map.

步骤406,根据该第二预测位置和该车道线标注位置确定第二损失函数对应的第二损失值。Step 406: Determine a second loss value corresponding to the second loss function according to the second predicted position and the marked position of the lane line.

其中,该第二损失函数也可以是交叉熵损失函数或者Focal Loss损失函数,还可以是现有技术中的其他损失函数。The second loss function may also be a cross-entropy loss function or a Focal Loss loss function, and may also be other loss functions in the prior art.

步骤407,根据该第一损失值和该第二损失值确定第三损失函数对应的第三损失值。Step 407: Determine a third loss value corresponding to the third loss function according to the first loss value and the second loss value.

For example, the third loss function may be Ltotal = α·Lcls + β·Lseg, where Ltotal denotes the third loss value, α and β are weighting coefficients, Lcls is the second loss value corresponding to the second loss function, and Lseg is the first loss value corresponding to the first loss function.
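In code form this combination is a single weighted sum; the values of alpha and beta below are placeholders.

    def third_loss(l_seg, l_cls, alpha=1.0, beta=1.0):
        # Ltotal = alpha * Lcls + beta * Lseg (step 407)
        return alpha * l_cls + beta * l_seg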

步骤408,确定该第三损失值是否大于或者等于预设损失值阈值。Step 408: Determine whether the third loss value is greater than or equal to a preset loss value threshold.

本步骤中,在该第三损失值大于或者等于预设损失值阈值的情况下,执行步骤409,在确定该第三损失值小于该预设损失值阈值的情况下,执行步骤410。In this step, when the third loss value is greater than or equal to the preset loss value threshold, step 409 is performed, and when it is determined that the third loss value is less than the preset loss value threshold, step 410 is performed.

步骤409,调整该预设初始模型的模型参数,以得到更新后的目标模型。Step 409: Adjust the model parameters of the preset initial model to obtain an updated target model.

本步骤之后,可以再次跳转至步骤402,以再次执行步骤402至步骤408。After this step, it is possible to jump to step 402 again to perform steps 402 to 408 again.

步骤410,删除当前该目标模型中的该分割网络,以得到该预设车道线检测模型。Step 410, delete the segmentation network in the current target model to obtain the preset lane line detection model.
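A minimal sketch of the training loop of steps 402 to 410 might look as follows; the optimizer, the per-batch stopping check and the attribute name used to drop the segmentation branch are all assumptions, not the disclosed implementation.

    import torch

    def train_preset_model(model, loader, optimizer, seg_loss_fn, cls_loss_fn,
                           alpha=1.0, beta=1.0, loss_threshold=0.05):
        third = None
        while third is None or third >= loss_threshold:       # step 408
            for images, lane_labels in loader:
                first_pred, second_pred = model(images)        # segmentation / classification outputs
                l_seg = seg_loss_fn(first_pred, lane_labels)   # first loss value (step 404)
                l_cls = cls_loss_fn(second_pred, lane_labels)  # second loss value (step 406)
                loss = alpha * l_cls + beta * l_seg            # third loss value (step 407)
                optimizer.zero_grad()
                loss.backward()                                # step 409: adjust model parameters
                optimizer.step()
                third = loss.item()
        model.segmentation_branch = None                       # step 410: drop the segmentation network
        return model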

With the above technical solution, the preset lane line detection model is trained jointly through the segmentation network and the classification network, which effectively improves the accuracy of the model's detection results. To increase processing speed, the segmentation network can be removed at the application stage so that only the feature extraction network and the classification network are kept, which not only preserves the accuracy of the lane line detection results but also effectively improves the detection efficiency of the model.

Optionally, when the target position of the lane line in the target detection image is determined by the preset lane line detection model obtained through the steps shown in FIG. 4 above, the steps shown in FIG. 5 below may be included. FIG. 5 is a flowchart of a lane line detection method according to the embodiment shown in FIG. 4; as shown in FIG. 5, the method includes:

步骤501,获取目标检测图像。Step 501, acquiring a target detection image.

The target detection image may be any image to be detected from various lane line detection scenarios (for example, scenes with straight lane lines, hyperbolic lane lines, or B-spline-curve lane lines), and the image to be detected may contain the lane lines to be detected.

步骤502,通过该特征提取网络获取该目标检测图像对应的目标特征图。Step 502: Obtain a target feature map corresponding to the target detection image through the feature extraction network.

Here the feature extraction network may include a feature pyramid network, the feature pyramid network includes a bottom-up sub-network and a top-down sub-network, and the target feature map may be the minimum-scale feature map provided by the bottom-up sub-network (feature map x4 in FIG. 2).

步骤503,将该目标特征图输入该分类网络,以得到该分类网络输出的该目标检测图像中车道线的该目标位置。Step 503: Input the target feature map into the classification network to obtain the target position of the lane line in the target detection image output by the classification network.

通过以上步骤501至步骤503所示方法确定该目标检测图像中车道线的该目标位置,不仅能够保证车道线检测结果的准确性,也能够有效提升模型的检测效率。Determining the target position of the lane line in the target detection image by the methods shown in the above steps 501 to 503 can not only ensure the accuracy of the lane line detection result, but also effectively improve the detection efficiency of the model.
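For completeness, a hedged sketch of this inference path (steps 501 to 503) is given below; the attribute names feature_extractor and classifier are placeholders for whatever the exported model actually exposes.

    import torch

    @torch.no_grad()
    def detect_lanes(model, image):
        feats, _ = model.feature_extractor(image)   # step 502: multi-scale feature maps
        x4 = feats[-1]                              # target feature map (minimum scale)
        second_prob, keep = model.classifier(x4)    # step 503: classification network output
        return second_prob, keep                    # the kept anchors give the target positions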

图6是本公开一示例性实施例示出的一种车道线检测装置的框图,如图6所示,该装置可以包括:FIG. 6 is a block diagram of a lane line detection apparatus according to an exemplary embodiment of the present disclosure. As shown in FIG. 6 , the apparatus may include:

获取模块601,被配置为获取目标检测图像;an acquisition module 601, configured to acquire a target detection image;

确定模块602,被配置为将该目标检测图像输入预设车道线检测模型中,以使该预设车道线检测模型输出该目标检测图像中车道线的目标位置;The determining module 602 is configured to input the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image;

其中,该预设车道线检测模型通过以下方式训练得到:Among them, the preset lane line detection model is obtained by training in the following ways:

获取多个车道线检测样本图像,该车道线检测图像包括车道线标注位置;Acquiring a plurality of lane line detection sample images, the lane line detection images including the lane line marking positions;

将每个该车道线检测样本图像输入预设初始模型,该预设初始模型包括分割网络和分类网络,该分割网络用于确定车道线检测样本图像中车道线的第一预测位置,该分类网络用于确定车道线检测样本图像中车道线的第二预测位置;Each of the lane line detection sample images is input into a preset initial model, the preset initial model includes a segmentation network and a classification network, the segmentation network is used to determine the first predicted position of the lane line in the lane line detection sample image, the classification network for determining the second predicted position of the lane line in the lane line detection sample image;

根据该第一预测位置,该第二预测位置和该车道线标注位置对该预设初始模型进行训练,以得到该预设车道线检测模型。According to the first predicted position, the second predicted position and the lane marking position, the preset initial model is trained to obtain the preset lane line detection model.

以上技术方案,通过获取目标检测图像;将该目标检测图像输入预设车道线检测模型中,以使该预设车道线检测模型输出该目标检测图像中车道线的目标位置,其中,该预设车道线检测模型是通过包括分割网络和分类网络的预设初始模型训练得到,能够有效提升各种检测场景中车道线检测结果的准确性。In the above technical solution, by acquiring a target detection image; inputting the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image, wherein the preset lane line detection model The lane line detection model is trained by a preset initial model including a segmentation network and a classification network, which can effectively improve the accuracy of lane line detection results in various detection scenarios.

可选地,该预设初始模型还包括特征提取网络,该特征提取网络与该分类网络和该分割网络耦合,该预设初始模型,用于Optionally, the preset initial model further includes a feature extraction network, the feature extraction network is coupled with the classification network and the segmentation network, and the preset initial model is used for

通过该特征提取网络获取每个该车道线检测样本图像对应的多尺度样本特征图,并根据该多尺度样本特征图确定包括图像全局信息的第一样本特征图;Obtain a multi-scale sample feature map corresponding to each lane line detection sample image through the feature extraction network, and determine a first sample feature map including global image information according to the multi-scale sample feature map;

通过该分割网络根据该第一样本特征图确定该车道线检测样本图像中车道线的第一预测位置;Determine the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map by the segmentation network;

通过该分类网络根据该多尺度样本特征图确定该车道线检测样本图像中车道线的第二预测位置。The second predicted position of the lane line in the lane line detection sample image is determined by the classification network according to the multi-scale sample feature map.

可选地,该预设车道线检测模型通过以下方式训练得到,Optionally, the preset lane line detection model is obtained by training in the following manner:

The multi-scale sample feature map corresponding to the lane line detection sample image is obtained through the feature extraction network in the preset initial model, and a first sample feature map including global image information is determined according to the minimum-scale sample feature map among the multi-scale sample feature maps;

通过该分割网络根据该第一样本特征图确定该车道线检测样本图像中车道线的第一预测位置;Determine the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map by the segmentation network;

根据该第一预测位置和该车道线标注位置确定第一损失函数对应的第一损失值;Determine a first loss value corresponding to the first loss function according to the first predicted position and the lane marking position;

通过该分类网路根据该最小尺度样本特征图确定该车道线检测样本图像中车道线的第二预测位置;Determine the second predicted position of the lane line in the lane line detection sample image according to the minimum scale sample feature map by the classification network;

根据该第二预测位置和该车道线标注位置确定第二损失函数对应的第二损失值;Determine a second loss value corresponding to the second loss function according to the second predicted position and the lane marking position;

根据该第一损失值和该第二损失值确定第三损失函数对应的第三损失值;Determine a third loss value corresponding to the third loss function according to the first loss value and the second loss value;

When the third loss value is greater than or equal to a preset loss value threshold, the model parameters of the preset initial model are adjusted to obtain an updated target model, and the steps from obtaining the multi-scale sample feature map corresponding to the lane line detection sample image through the feature extraction network in the preset initial model to determining the third loss value corresponding to the third loss function according to the first loss value and the second loss value are performed again, until, when it is determined that the third loss value is less than the preset loss value threshold, the segmentation network in the current target model is deleted to obtain the preset lane line detection model.

Optionally, the feature extraction network includes a feature pyramid network and a global feature extraction network, the feature pyramid network includes a bottom-up sub-network and a top-down sub-network, the input end of the global feature extraction network is coupled with the output end of the bottom-up sub-network, the output end of the global feature extraction network is coupled with the input end of the top-down sub-network and is also coupled with the input end of the segmentation network, and the output end of the top-down sub-network is also coupled with the input end of the segmentation network; the preset initial model is used to:

Obtain the multi-scale sample feature map corresponding to the lane line detection sample image through the bottom-up sub-network of the feature pyramid network, and input the minimum-scale sample feature map among the multi-scale sample feature maps into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and to the top-down sub-network;

Input multi-scale second sample feature maps to the segmentation network through the top-down sub-network, so that the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature maps.

可选地,该分割网络根据该第一样本特征图和该第二样本特征图确定该车道线检测样本图像中车道线的第一预测位置,包括:Optionally, the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map, including:

通过该分割网络对该第一样本特征图和该第二样本特征图进行上采样处理,以得到待分割样本特征图;Perform up-sampling processing on the first sample feature map and the second sample feature map through the segmentation network to obtain the sample feature map to be segmented;

按照预设分割方式对该待分割样本特征图进行分割,以得到多个局部特征图;Segment the feature map of the sample to be segmented according to a preset segmentation method to obtain a plurality of local feature maps;

对该多个局部特征图进行卷积和拼接处理,以得到目标样本特征图;Perform convolution and splicing processing on the multiple local feature maps to obtain the target sample feature map;

根据该目标样本特征图确定该车道线检测样本图像中每个像素属于车道线的第一概率;Determine the first probability that each pixel in the lane line detection sample image belongs to the lane line according to the target sample feature map;

根据每个像素的该第一概率确定该车道线检测样本图像中车道线的第一预测位置。The first predicted position of the lane line in the lane line detection sample image is determined according to the first probability of each pixel.

可选地,该按照预设分割方式包括:Optionally, the preset division method includes:

At least one of the following: dividing the sample feature map to be segmented into H local feature maps of size C*W, dividing it into C local feature maps of size H*W, or dividing it into W local feature maps of size H*C, where H is the number of layers of the sample feature map to be segmented, C is its length, and W is its width.

可选地,该对该多个局部特征图进行卷积和拼接处理,以得到目标样本特征图,包括:Optionally, performing convolution and splicing processing on the multiple local feature maps to obtain target sample feature maps, including:

获取当前局部特征图,并对当前局部特征图进行卷积操作,以得到卷积后的指定局部特征图;Obtain the current local feature map, and perform a convolution operation on the current local feature map to obtain the specified local feature map after convolution;

将该指定局部特征图与该当前局部特征图对应的下一个局部特征图进行拼接后,得到更新后的当前局部特征图;After splicing the specified local feature map with the next local feature map corresponding to the current local feature map, an updated current local feature map is obtained;

确定该当前局部特征图对应的下一个该局部特征图是否为该多个局部特征图中最后一个;determining whether the next local feature map corresponding to the current local feature map is the last one of the multiple local feature maps;

When it is determined that the local feature map following the current local feature map is not the last one of the plurality of local feature maps, performing again the steps from obtaining the current local feature map and performing a convolution operation on it to obtain the convolved specified local feature map, up to determining whether the local feature map following the current local feature map is the last one of the plurality of local feature maps;

When it is determined that the local feature map following the current local feature map is the last one of the plurality of local feature maps, obtaining the current local feature map, performing a convolution operation on it to obtain the convolved specified local feature map, and taking the specified local feature map corresponding to the current local feature map as the target sample feature map.

可选地,该自底向上子网络的输出端与该分类网络的输入端耦合,该预设初始模型,用于:Optionally, the output end of the bottom-up sub-network is coupled with the input end of the classification network, and the preset initial model is used for:

通过该自底向上子网络向该分类网络输入最小尺度样本特征图;Input the minimum scale sample feature map to the classification network through the bottom-up sub-network;

The minimum-scale sample feature map is divided into a plurality of anchor boxes through the classification network, a second probability that each anchor box belongs to the lane line is determined, and the second predicted position of the lane line in the lane line detection sample image is determined according to the second probability.

可选地,该确定模块,被配置为:Optionally, the determining module is configured to:

通过该特征提取网络获取该目标检测图像对应的目标特征图;Obtain the target feature map corresponding to the target detection image through the feature extraction network;

将该目标特征图输入该分类网络,以得到该分类网络输出的该目标检测图像中车道线的该目标位置。The target feature map is input into the classification network to obtain the target position of the lane line in the target detection image output by the classification network.

With the above technical solution, the preset lane line detection model is trained jointly through the segmentation network and the classification network, which effectively improves the accuracy of the model's detection results. To increase processing speed, the segmentation network can be removed at the application stage so that only the feature extraction network and the classification network are kept, which not only preserves the accuracy of the lane line detection results but also effectively improves the detection efficiency of the model.

关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。Regarding the apparatus in the above-mentioned embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment of the method, and will not be described in detail here.

本公开还提供一种车辆,所述车辆包括以上图6所述的车道线检测装置。The present disclosure also provides a vehicle including the lane line detection device described above in FIG. 6 .

本公开还提供一种计算机可读存储介质,其上存储有计算机程序指令,该程序指令被处理器执行时实现本公开提供的车道线检测方法的步骤。The present disclosure also provides a computer-readable storage medium on which computer program instructions are stored, and when the program instructions are executed by a processor, implement the steps of the lane line detection method provided by the present disclosure.

图7是根据一示例性实施例示出的一种车道线检测装置的框图。例如,装置800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等。Fig. 7 is a block diagram of a lane line detection apparatus according to an exemplary embodiment. For example, apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.

Referring to FIG. 7, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.

处理组件802通常控制装置800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的车道线检测方法的全部或部分步骤。此外,处理组件802可以包括一个或多个模块,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the above-described lane line detection method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.

存储器804被配置为存储各种类型的数据以支持在装置800的操作。这些数据的示例包括用于在装置800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。Memory 804 is configured to store various types of data to support operations at device 800 . Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.

电力组件806为装置800的各种组件提供电力。电力组件806可以包括电源管理系统,一个或多个电源,及其他与为装置800生成、管理和分配电力相关联的组件。Power component 806 provides power to various components of device 800 . Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power to device 800 .

多媒体组件808包括在该装置800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。该触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与该触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当装置800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。Multimedia component 808 includes screens that provide an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor can sense not only the boundaries of the touch or swipe action, but also the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the apparatus 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.

音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当装置800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。Audio component 810 is configured to output and/or input audio signals. For example, audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when device 800 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 804 or transmitted via communication component 816 . In some embodiments, audio component 810 also includes a speaker for outputting audio signals.

I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。The I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.

传感器组件814包括一个或多个传感器,用于为装置800提供各个方面的状态评估。例如,传感器组件814可以检测到装置800的打开/关闭状态,组件的相对定位,例如该组件为装置800的显示器和小键盘,传感器组件814还可以检测装置800或装置800一个组件的位置改变,用户与装置800接触的存在或不存在,装置800方位或加速/减速和装置800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of device 800 . For example, the sensor assembly 814 can detect the open/closed state of the device 800, the relative positioning of the components, such as the display and keypad of the device 800, the sensor assembly 814 can also detect the position change of the device 800 or a component of the device 800, Presence or absence of user contact with device 800 , device 800 orientation or acceleration/deceleration and temperature changes of device 800 . Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

通信组件816被配置为便于装置800和其他设备之间有线或无线方式的通信。装置800可以接入基于通信标准的无线网络,如WiFi,2G或3G,或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,该通信组件816还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。Communication component 816 is configured to facilitate wired or wireless communication between apparatus 800 and other devices. Device 800 may access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.

In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above lane line detection method.

在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器804,上述指令可由装置800的处理器820执行以完成上述车道线检测方法。例如,该非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as a memory 804 including instructions, is also provided, and the instructions are executable by the processor 820 of the apparatus 800 to complete the lane line detection method described above. For example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.

在另一示例性实施例中,还提供一种计算机程序产品,该计算机程序产品包含能够由可编程的装置执行的计算机程序,该计算机程序具有当由该可编程的装置执行时用于执行上述的车道线检测方法的代码部分。In another exemplary embodiment, there is also provided a computer program product comprising a computer program executable by a programmable apparatus, the computer program having, when executed by the programmable apparatus, for performing the above The code section of the lane line detection method.

本领域技术人员在考虑说明书及实践本公开后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or techniques in the technical field not disclosed by the present disclosure . The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。It is to be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A lane line detection method, characterized in that the method comprises: acquiring a target detection image; and inputting the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs a target position of a lane line in the target detection image; wherein the preset lane line detection model is obtained by training as follows: acquiring a plurality of lane line detection sample images, the lane line detection sample images including annotated lane line positions; inputting each of the lane line detection sample images into a preset initial model, the preset initial model including a segmentation network and a classification network, the segmentation network being used to determine a first predicted position of the lane line in the lane line detection sample image, and the classification network being used to determine a second predicted position of the lane line in the lane line detection sample image; and training the preset initial model according to the first predicted position, the second predicted position and the annotated lane line position, to obtain the preset lane line detection model.

2. The method according to claim 1, characterized in that the preset initial model further comprises a feature extraction network coupled with the classification network and the segmentation network, and the preset initial model is used to: obtain, through the feature extraction network, a multi-scale sample feature map corresponding to each of the lane line detection sample images, and determine, according to the multi-scale sample feature map, a first sample feature map including global image information; determine, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map; and determine, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the multi-scale sample feature map.

3. The method according to claim 2, characterized in that the preset lane line detection model is obtained by training as follows: obtaining, through the feature extraction network in the preset initial model, the multi-scale sample feature map corresponding to the lane line detection sample image, and determining the first sample feature map including global image information according to the minimum-scale sample feature map among the multi-scale sample feature maps; determining, through the segmentation network, the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map; determining a first loss value corresponding to a first loss function according to the first predicted position and the annotated lane line position; determining, through the classification network, the second predicted position of the lane line in the lane line detection sample image according to the minimum-scale sample feature map; determining a second loss value corresponding to a second loss function according to the second predicted position and the annotated lane line position; determining a third loss value corresponding to a third loss function according to the first loss value and the second loss value; and, when the third loss value is greater than or equal to a preset loss value threshold, adjusting the model parameters of the preset initial model to obtain an updated target model and performing again the steps from obtaining the multi-scale sample feature map corresponding to the lane line detection sample image through the feature extraction network in the preset initial model to determining the third loss value corresponding to the third loss function according to the first loss value and the second loss value, until, when it is determined that the third loss value is less than the preset loss value threshold, the segmentation network in the current target model is deleted to obtain the preset lane line detection model.

4. The method according to claim 2, characterized in that the feature extraction network comprises a feature pyramid network and a global feature extraction network, the feature pyramid network comprises a bottom-up sub-network and a top-down sub-network, the input end of the global feature extraction network is coupled with the output end of the bottom-up sub-network, the output end of the global feature extraction network is coupled with the input end of the top-down sub-network and is also coupled with the input end of the segmentation network, and the output end of the top-down sub-network is also coupled with the input end of the segmentation network; the preset initial model is used to: obtain, through the bottom-up sub-network of the feature pyramid network, the multi-scale sample feature map corresponding to the lane line detection sample image, and input the minimum-scale sample feature map among the multi-scale sample feature maps into the global feature extraction network, so that the global feature extraction network outputs the first sample feature map to the segmentation network and to the top-down sub-network; and input multi-scale second sample feature maps to the segmentation network through the top-down sub-network, so that the segmentation network determines the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature maps.

5. The method according to claim 4, characterized in that the segmentation network determining the first predicted position of the lane line in the lane line detection sample image according to the first sample feature map and the second sample feature map comprises: up-sampling the first sample feature map and the second sample feature map through the segmentation network to obtain a sample feature map to be segmented; segmenting the sample feature map to be segmented in a preset segmentation manner to obtain a plurality of local feature maps; performing convolution and splicing processing on the plurality of local feature maps to obtain a target sample feature map; determining, according to the target sample feature map, a first probability that each pixel in the lane line detection sample image belongs to a lane line; and determining the first predicted position of the lane line in the lane line detection sample image according to the first probability of each pixel.

6. The method according to claim 5, characterized in that the preset segmentation manner comprises at least one of: dividing the sample feature map to be segmented into H local feature maps of size C*W, dividing the sample feature map to be segmented into C local feature maps of size H*W, and dividing the sample feature map to be segmented into W local feature maps of size H*C, where H is the number of layers of the sample feature map to be segmented, C is the length of the sample feature map to be segmented, and W is the width of the sample feature map to be segmented.

7. The method according to claim 5, characterized in that performing convolution and splicing processing on the plurality of local feature maps to obtain the target sample feature map comprises: obtaining a current local feature map and performing a convolution operation on the current local feature map to obtain a convolved specified local feature map; splicing the specified local feature map with the local feature map following the current local feature map to obtain an updated current local feature map; determining whether the local feature map following the current local feature map is the last one of the plurality of local feature maps; when it is determined that the local feature map following the current local feature map is not the last one of the plurality of local feature maps, performing again the steps from obtaining the current local feature map and performing the convolution operation on it to obtain the convolved specified local feature map, up to determining whether the local feature map following the current local feature map is the last one of the plurality of local feature maps; and when it is determined that the local feature map following the current local feature map is the last one of the plurality of local feature maps, obtaining the current local feature map, performing the convolution operation on it to obtain the convolved specified local feature map, and taking the specified local feature map corresponding to the current local feature map as the target sample feature map.

8. The method according to claim 4, characterized in that the output end of the bottom-up sub-network is coupled with the input end of the classification network, and the preset initial model is used to: input the minimum-scale sample feature map to the classification network through the bottom-up sub-network; and divide the minimum-scale sample feature map into a plurality of anchor boxes through the classification network, determine a second probability that each anchor box belongs to the lane line, and determine the second predicted position of the lane line in the lane line detection sample image according to the second probability.

9. The method according to any one of claims 2-8, characterized in that inputting the target detection image into the preset lane line detection model, so that the preset lane line detection model outputs the target position of the lane line in the target detection image, comprises: obtaining, through the feature extraction network, a target feature map corresponding to the target detection image; and inputting the target feature map into the classification network to obtain the target position of the lane line in the target detection image output by the classification network.

10. A lane line detection apparatus, characterized in that the apparatus comprises: an acquisition module configured to acquire a target detection image; and a determination module configured to input the target detection image into a preset lane line detection model, so that the preset lane line detection model outputs a target position of a lane line in the target detection image; wherein the preset lane line detection model is obtained by training as follows: acquiring a plurality of lane line detection sample images, the lane line detection sample images including annotated lane line positions; inputting each of the lane line detection sample images into a preset initial model, the preset initial model including a segmentation network used to determine a first predicted position of the lane line in the lane line detection sample image and a classification network used to determine a second predicted position of the lane line in the lane line detection sample image; and training the preset initial model according to the first predicted position, the second predicted position and the annotated lane line position, to obtain the preset lane line detection model.

11. A vehicle, characterized in that the vehicle comprises the lane line detection apparatus of claim 10.

12. A lane line detection apparatus, characterized by comprising: a memory on which a computer program is stored; and a processor configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-9.

13. A computer-readable storage medium on which computer program instructions are stored, characterized in that, when the program instructions are executed by a processor, the steps of the method according to any one of claims 1-9 are implemented.
CN202210444361.XA 2022-04-25 2022-04-25 Lane detection method, device, vehicle and storage medium Active CN114863392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210444361.XA CN114863392B (en) 2022-04-25 2022-04-25 Lane detection method, device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210444361.XA CN114863392B (en) 2022-04-25 2022-04-25 Lane detection method, device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN114863392A true CN114863392A (en) 2022-08-05
CN114863392B CN114863392B (en) 2025-01-24

Family

ID=82634163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210444361.XA Active CN114863392B (en) 2022-04-25 2022-04-25 Lane detection method, device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN114863392B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523550A (en) * 2023-11-22 2024-02-06 中化现代农业有限公司 Apple pest detection method, apple pest detection device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020103893A1 (en) * 2018-11-21 2020-05-28 北京市商汤科技开发有限公司 Lane line property detection method, device, electronic apparatus, and readable storage medium
CN111914596A (en) * 2019-05-09 2020-11-10 北京四维图新科技股份有限公司 Lane line detection method, device, system and storage medium
CN112528878A (en) * 2020-12-15 2021-03-19 中国科学院深圳先进技术研究院 Method and device for detecting lane line, terminal device and readable storage medium
WO2022028383A1 (en) * 2020-08-06 2022-02-10 长沙智能驾驶研究院有限公司 Lane line labeling method, detection model determining method, lane line detection method, and related device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020103893A1 (en) * 2018-11-21 2020-05-28 北京市商汤科技开发有限公司 Lane line property detection method, device, electronic apparatus, and readable storage medium
CN111914596A (en) * 2019-05-09 2020-11-10 北京四维图新科技股份有限公司 Lane line detection method, device, system and storage medium
WO2022028383A1 (en) * 2020-08-06 2022-02-10 长沙智能驾驶研究院有限公司 Lane line labeling method, detection model determining method, lane line detection method, and related device
CN112528878A (en) * 2020-12-15 2021-03-19 中国科学院深圳先进技术研究院 Method and device for detecting lane line, terminal device and readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523550A (en) * 2023-11-22 2024-02-06 中化现代农业有限公司 Apple pest detection method, apple pest detection device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114863392B (en) 2025-01-24

Similar Documents

Publication Publication Date Title
CN108629354B (en) Target detection method and device
TWI759647B (en) Image processing method, electronic device, and computer-readable storage medium
CN106651955B (en) Method and device for positioning target object in picture
EP3226204B1 (en) Method and apparatus for intelligently capturing image
US11288531B2 (en) Image processing method and apparatus, electronic device, and storage medium
US20220392202A1 (en) Imaging processing method and apparatus, electronic device, and storage medium
US10007841B2 (en) Human face recognition method, apparatus and terminal
CN107480665B (en) Character detection method and device and computer readable storage medium
CN108010060B (en) Target detection method and device
CN114267041B (en) Method and device for identifying object in scene
CN106557759B (en) Signpost information acquisition method and device
CN112990197A (en) License plate recognition method and device, electronic equipment and storage medium
CN112200040A (en) Occlusion image detection method, device and medium
WO2023230927A1 (en) Image processing method and device, and readable storage medium
CN113313115A (en) License plate attribute identification method and device, electronic equipment and storage medium
WO2022099988A1 (en) Object tracking method and apparatus, electronic device, and storage medium
CN112330721B (en) Three-dimensional coordinate recovery method and device, electronic equipment and storage medium
CN114821573B (en) Target detection method, device, storage medium, electronic device and vehicle
CN105513045A (en) Image processing method, device and terminal
WO2023155350A1 (en) Crowd positioning method and apparatus, electronic device, and storage medium
CN114863392A (en) Lane line detection method, device, vehicle and storage medium
CN108038870A (en) The method, apparatus and readable storage medium storing program for executing of object tracking
CN116434016B (en) Image information enhancement method, model training method, device, equipment and medium
US12333824B2 (en) Object detection method and apparatus for vehicle, device, vehicle and medium
CN114842457B (en) Model training and feature extraction method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant